CN113779502A - Image processing evidence function estimation method based on correlation vector machine - Google Patents

Image processing evidence function estimation method based on correlation vector machine Download PDF

Info

Publication number
CN113779502A
CN113779502A (application CN202110963746.2A)
Authority
CN
China
Prior art keywords
formula
function
vector
image
image processing
Prior art date
Legal status
Granted
Application number
CN202110963746.2A
Other languages
Chinese (zh)
Other versions
CN113779502B (en)
Inventor
邹大伟 (ZOU Dawei)
马春华 (MA Chunhua)
Current Assignee
Suihua University
Original Assignee
Suihua University
Priority date
Filing date
Publication date
Application filed by Suihua University filed Critical Suihua University
Priority to CN202110963746.2A priority Critical patent/CN113779502B/en
Publication of CN113779502A publication Critical patent/CN113779502A/en
Application granted granted Critical
Publication of CN113779502B publication Critical patent/CN113779502B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/17Function evaluation by approximation methods, e.g. inter- or extrapolation, smoothing, least mean square method
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/11Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems


Abstract

The invention discloses an image processing evidence function estimation method based on a relevance vector machine. Step 1: according to the modified prior form of the weight parameters, prove that the posterior distribution of the weights for the data in the image is a normal distribution, and compute the mean and covariance of that normal distribution. Step 2: integrate the weight parameters out of the product of the likelihood function and the prior distribution of the weights by means of the multivariate Taylor formula, obtaining an explicit expression for the evidence function, i.e. the marginal likelihood function. Step 3: based on the marginal likelihood function of the data in the image from step 2, maximize the evidence function containing the hyperparameters using matrix calculus, matrix algebra and optimization methods, thereby obtaining an iterative optimization algorithm for each hyperparameter of the image. The method addresses the complex, difficult-to-compute integral that arises in image processing when the product of the likelihood function and the weight prior is integrated over the weight parameters to obtain the evidence function.

Description

Image processing evidence function estimation method based on correlation vector machine
Technical Field
The invention belongs to the field of image processing, and in particular relates to an image processing evidence function estimation method based on a relevance vector machine.
Background
In estimating the evidence function associated with a relevance vector machine in image processing, the posterior distribution must be proved to be a normal distribution, and its mean and covariance must be computed. Moreover, in the standard evidence-function formulation for the relevance vector machine, the prior distribution of the weight parameters is a product of zero-mean normal distributions, which lacks generality.
When the product of the likelihood function and the prior distribution of the weights is integrated over the weight parameters to obtain the evidence function, a complex integral has to be faced and the computation is difficult. How to find a new method and logical framework that evaluates the relevant integrals of image processing more simply and logically, and maximizes the evidence function more effectively, has so far received relatively little research attention.
Disclosure of Invention
The invention provides an image processing evidence function estimation method based on a relevance vector machine, which addresses the complex, difficult-to-compute integral that arises in image processing when the product of the likelihood function and the prior distribution of the weights is integrated over the weight parameters to obtain the evidence function.
The invention is realized by the following technical scheme:
an image processing evidence function estimation method based on a correlation vector machine, the evidence function estimation method comprises the following steps:
step 1: the data in the image is proved to be normal distribution by using the mean value and covariance in the normal distribution according to the corrected weight parameter prior form;
step 2: integrating the weight parameters of the data in the image according to a multivariate Taylor formula and a product of a likelihood function and the prior distribution of the weight to obtain a specific expression of an evidence function, namely an edge likelihood function;
and step 3: and (3) based on the edge likelihood function of the data in the image in the step (2), maximizing the evidence function containing the hyper-parameters by utilizing a matrix calculus, a matrix algebra and an optimization method, thereby obtaining the optimization iterative algorithm of each hyper-parameter of the image.
Further, step 1 specifically rests on the following matrix-calculus facts. When x is a scalar and A = A(x) is an n×n invertible symmetric matrix,

∂/∂x ln|A| = Tr(A^{-1} ∂A/∂x),

where Tr(·) is the trace of a matrix; moreover

Tr(xy^T) = x^T y,

where x and y are vectors. The operator h·∇ is defined as

h·∇ = h_1 ∂/∂x_1 + h_2 ∂/∂x_2,

and (h·∇)^k denotes its k-fold application, where k ∈ N.

Let h = (h_1, h_2)^T and x = (x_1, x_2)^T, where h_1 is the first component of the vector h, h_2 is the second component of the vector h, x_1 is the first component of the vector x, and x_2 is the second component of the vector x. One obtains

(h·∇) f(x) = h_1 ∂f(x)/∂x_1 + h_2 ∂f(x)/∂x_2

and

(h·∇)^2 f(x) = h_1^2 ∂²f(x)/∂x_1² + 2 h_1 h_2 ∂²f(x)/∂x_1∂x_2 + h_2^2 ∂²f(x)/∂x_2².

Applying the operator h·∇ to f at x repeatedly in this way, the multivariate Taylor formula follows:

f(x + h) = f(x) + (h·∇) f(x) + (1/2!)(h·∇)² f(x) + … + (1/k!)(h·∇)^k f(x) + …,

wherein h = (h_1, h_2)^T.

Now define f: R^n → R by f(w) = w^T A w + w^T b + c, wherein A is an n×n invertible symmetric matrix, b and w are n-dimensional column vectors, and c is a scalar. The Taylor expansion of f(w) about a point w_0 is

f(w) = f(w_0) + (w − w_0)^T ∇f(w_0) + (1/2)(w − w_0)^T H (w − w_0),

where the (i, j)-th element of the Hessian H is given by H_{ij} = ∂²f/∂w_i ∂w_j.

The general form of a linear regression model in machine learning is

y(x, w) = w_0 + Σ_{i=1}^{M−1} w_i φ_i(x)   (1)

wherein φ_i(x) is a nonlinear basis function of the input variable, w_0 is a bias parameter, and x is an image data vector.

Defining φ_0(x) = 1, formula (1) can be rewritten as

y(x, w) = w^T φ(x)   (2)

wherein w = (w_0, …, w_{M−1})^T and φ(x) = (φ_0(x), …, φ_{M−1}(x))^T.

The target variable is a deterministic function y(x, w) with additive Gaussian noise, i.e.

t = y(x, w) + ε   (3)

where ε is a normal random variable with mean 0 and precision β, so that

p(t | x, w, β) = N(t | y(x, w), β^{−1})   (4).
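To make the model (1)–(4) concrete, here is a minimal numpy sketch that builds a design matrix Φ with a bias column φ_0(x) = 1 and draws noisy targets t = Φw + ε. The Gaussian basis, the grid of centers, and all variable names are illustrative assumptions; the patent does not prescribe a particular basis function φ_i.

```python
import numpy as np

def design_matrix(x, centers, width=1.0):
    """N x M design matrix Phi with phi_0(x) = 1 (bias column) followed by
    Gaussian bumps; an assumed basis, used only for illustration."""
    gauss = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width) ** 2)
    return np.hstack([np.ones((x.shape[0], 1)), gauss])

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)                       # image-derived scalar inputs (assumed)
Phi = design_matrix(x, centers=np.linspace(0.0, 1.0, 4))
w_true = rng.normal(size=Phi.shape[1])              # ground-truth weights
beta = 25.0                                         # noise precision beta
t = Phi @ w_true + rng.normal(scale=beta ** -0.5, size=x.shape)  # eq (3)
```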
Further, the prior on the weight parameters in step 1 takes the form

p(w | α, γ) = ∏_{i=1}^{M} N(w_i | γ_i, α_i^{−1})   (5)

where α is the precision (inverse-variance) vector, α = (α_1, …, α_M)^T, and γ is the mean vector, γ = (γ_1, …, γ_M)^T.

Using equation (4), the likelihood function is obtained:

p(t | X, w, β) = ∏_{n=1}^{N} N(t_n | w^T φ(x_n), β^{−1}) = (β/2π)^{N/2} exp{ −(β/2) ‖t − Φw‖² }   (6)

wherein t = (t_1, …, t_N)^T and M is the number of parameters to be determined.

Similarly,

p(w | α, γ) = (2π)^{−M/2} |A|^{1/2} exp{ −(1/2)(w − γ)^T A (w − γ) }   (7)

wherein A = diag(α_i).

The posterior distribution p(w | t, X, α, β, γ) of the weight parameters is then also a normal distribution N(w | m, Σ), where

m = (A + βΦ^TΦ)^{−1}(Aγ + βΦ^T t)   (8)

Σ = (A + βΦ^TΦ)^{−1}   (9)

wherein Φ is the N×M design matrix with rows φ(x_n)^T, i.e. Φ_{ni} = φ_i(x_n).
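Equations (8) and (9) amount to two lines of linear algebra. A minimal sketch (the function name and test shapes are assumptions, not from the patent):

```python
import numpy as np

def posterior_moments(Phi, t, alpha, gamma, beta):
    """Mean m and covariance Sigma of the weight posterior
    p(w | t, X, alpha, beta, gamma) = N(w | m, Sigma) under the
    nonzero-mean prior (5); implements equations (8) and (9)."""
    A = np.diag(alpha)
    Sigma = np.linalg.inv(A + beta * Phi.T @ Phi)   # eq (9)
    m = Sigma @ (A @ gamma + beta * Phi.T @ t)      # eq (8)
    return m, Sigma
```

With γ = 0 and A = diag(α) this reduces to the familiar zero-mean relevance vector machine posterior.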
Further, step 2 is specifically as follows:

p(t | X, α, β, γ) = ∫ p(t | X, w, β) p(w | α, γ) dw   (10)

Using formula (6) and formula (7), one obtains

p(t | X, α, β, γ) = (β/2π)^{N/2} (2π)^{−M/2} |A|^{1/2} ∫ exp{ −E(w) } dw   (11)

wherein

E(w) = (β/2) ‖t − Φw‖² + (1/2)(w − γ)^T A (w − γ).

Let ∇_w E(w) = 0; this yields w = (A + βΦ^TΦ)^{−1}(βΦ^T t + Aγ) = m, and since ∇∇_w E(w) = A + βΦ^TΦ = Σ^{−1}, the multivariate Taylor formula gives

E(w) = E(m) + (1/2)(w − m)^T Σ^{−1} (w − m),

wherein

E(m) = (β/2) ‖t − Φm‖² + (1/2)(m − γ)^T A (m − γ)   (12)

so that

∫ exp{ −E(w) } dw = exp{ −E(m) } (2π)^{M/2} |Σ|^{1/2}.

Using formula (11) and formula (12), one obtains the evidence function, i.e. the marginal likelihood,

p(t | X, α, β, γ) = (β/2π)^{N/2} |A|^{1/2} |Σ|^{1/2} exp{ −E(m) }   (13)

wherein m = (A + βΦ^TΦ)^{−1}(βΦ^T t + Aγ), Σ = (A + βΦ^TΦ)^{−1}, and X = (x_1, x_2, …, x_N).
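The closed form (13), and its logarithm (14), can be checked numerically: the same marginal likelihood is the density of t under N(Φγ, β^{−1}I + ΦA^{−1}Φ^T). A sketch of the log-evidence (function name is an assumption):

```python
import numpy as np

def log_evidence(Phi, t, alpha, gamma, beta):
    """ln p(t | X, alpha, beta, gamma) from the closed form (13)/(14)."""
    N, M = Phi.shape
    A = np.diag(alpha)
    Sigma = np.linalg.inv(A + beta * Phi.T @ Phi)            # eq (9)
    m = Sigma @ (A @ gamma + beta * Phi.T @ t)               # eq (8)
    E_m = 0.5 * beta * np.sum((t - Phi @ m) ** 2) \
        + 0.5 * (m - gamma) @ A @ (m - gamma)                # eq (12)
    return (0.5 * N * np.log(beta) - 0.5 * N * np.log(2 * np.pi)
            + 0.5 * np.sum(np.log(alpha))
            - E_m + 0.5 * np.linalg.slogdet(Sigma)[1])       # eq (14)
```

Agreement with the direct Gaussian density N(t | Φγ, β^{−1}I + ΦA^{−1}Φ^T) is an independent confirmation that the completion of the square behind (11)–(13) was carried out correctly.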
Further, step 3 is specifically as follows. Taking the logarithm of formula (13) gives

ln p(t | X, α, β, γ) = (N/2) ln β − (N/2) ln 2π + (1/2) Σ_{i=1}^{M} ln α_i − E(m) + (1/2) ln|Σ|   (14)

Using equations (9), (14) and ∂ ln|A|/∂x = Tr(A^{−1} ∂A/∂x), one obtains

∂/∂α_i ln|Σ| = −Tr(Σ ∂Σ^{−1}/∂α_i) = −Σ_ii   (15)

Because

∇_w E(w) |_{w=m} = 0   (16)

only the explicit dependence of E(m) on α_i contributes to the derivative; by using this and equation (15), it can be derived that

∂/∂α_i ln p(t | X, α, β, γ) = 1/(2α_i) − ∂E(m)/∂α_i − (1/2) Σ_ii   (17)

wherein Σ_ii is the i-th element of the main diagonal of the posterior covariance Σ.

From formula (12), it can be found that

∂E(m)/∂α_i = (1/2)(m_i − γ_i)²   (18)

wherein m_i is the i-th component of the posterior mean m.

From formula (17) and formula (18), setting the derivative to zero, one obtains

1/α_i = (m_i − γ_i)² + Σ_ii   (19)

From (19),

α_i (m_i − γ_i)² = 1 − α_i Σ_ii,

thereby obtaining the iterative update

α_i^{new} = λ_i / (m_i − γ_i)²   (20)

wherein λ_i = 1 − α_i Σ_ii.

According to the definition of Σ in equation (9) and ∂Σ^{−1}/∂β = Φ^TΦ, one obtains

∂/∂β ln|Σ| = −Tr(Σ Φ^TΦ)   (21)

According to ∂ ln|A|/∂x = Tr(A^{−1} ∂A/∂x) and Tr(xy^T) = x^T y, it follows that

∂/∂β ln p(t | X, α, β, γ) = N/(2β) − ∂E(m)/∂β − (1/2) Tr(Σ Φ^TΦ)   (22)

Because (A + βΦ^TΦ)(A + βΦ^TΦ)^{−1} = I_M, one obtains

Φ^TΦ Σ = β^{−1}(I_M − AΣ)   (23)

From formula (22) and formula (23), it can be obtained that

Tr(Σ Φ^TΦ) = β^{−1}(M − Tr(AΣ)) = β^{−1} Σ_{i=1}^{M} λ_i   (24)

From formula (12), it can be obtained that

∂E(m)/∂β = (1/2) ‖t − Φm‖²   (25)

From formula (24) and formula (25), setting ∂/∂β ln p = 0 gives N/β = ‖t − Φm‖² + β^{−1} Σ_i λ_i, thereby obtaining the iterative update

1/β^{new} = ‖t − Φm‖² / (N − Σ_{i=1}^{M} λ_i)   (26)

According to equation (12), the derivative with respect to γ is

∂E(m)/∂γ = A(γ − m)

Setting this derivative to zero yields γ = m, so that

γ_i = m_i   (27)

wherein γ_i is the i-th component of γ and m_i is the i-th component of m.
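The three updates (20), (26) and (27) can be sketched as one re-estimation sweep; in practice the sweep is repeated, recomputing m and Σ each time, until the hyperparameters converge. The small floor on (m_i − γ_i)² is a numerical-robustness assumption, not part of the patent:

```python
import numpy as np

def reestimate(Phi, t, alpha, gamma, beta):
    """One sweep of the iterative hyperparameter updates (20), (26), (27)."""
    N, M = Phi.shape
    A = np.diag(alpha)
    Sigma = np.linalg.inv(A + beta * Phi.T @ Phi)             # eq (9)
    m = Sigma @ (A @ gamma + beta * Phi.T @ t)                # eq (8)
    lam = 1.0 - alpha * np.diag(Sigma)                        # lambda_i = 1 - alpha_i * Sigma_ii
    alpha_new = lam / np.maximum((m - gamma) ** 2, 1e-12)     # eq (20)
    beta_new = (N - lam.sum()) / np.sum((t - Phi @ m) ** 2)   # eq (26), inverted
    gamma_new = m.copy()                                      # eq (27): gamma_i = m_i
    return alpha_new, beta_new, gamma_new
```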
The invention has the following beneficial effects:
The method adopts a more general prior form for the weight parameters, instead of the traditional normal prior in which each weight parameter has zero mean, so that the parameters have a larger value range; the evidence function containing the hyperparameters is then maximized for the image data by matrix calculus, matrix algebra and optimization methods, which helps to improve the resolution of the image data.
Drawings
FIG. 1 is a schematic flow diagram of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An image processing evidence function estimation method based on a relevance vector machine comprises steps 1 to 3 as set out above. The matrix-calculus preliminaries, the linear model (1)–(4), the weight-parameter prior (5), the likelihood (6), the prior density (7) and the posterior moments (8) and (9) are identical to those given in the summary of the invention and are not repeated here. It remains to prove that the posterior p(w | t, X, α, β, γ) is indeed the normal distribution N(w | m, Σ).
Before the proof, note the following. For a normal distribution N(x | μ, Σ), the negative exponent is

f(x) = (1/2)(x − μ)^T Σ^{−1} (x − μ).

Setting ∇f(x) = 0 gives x = μ, which implies that the stationary point of f(x) is the mean of the normal distribution; at the same time,

∇∇f(x) = Σ^{−1},

i.e. the second-order gradient of f(x) is the inverse of the covariance.

It is demonstrated below that p(w | t, X, α, β, γ) is a normal distribution. From formula (6) and formula (7), the negative exponent of the product p(t | X, w, β) p(w | α, γ) is, up to terms independent of w,

E(w) = (β/2) ‖t − Φw‖² + (1/2)(w − γ)^T A (w − γ).

Let ∇_w E(w) = 0; therefore

m = w = (A + βΦ^TΦ)^{−1}(Aγ + βΦ^T t),

and because

∇∇_w E(w) = A + βΦ^TΦ,

it follows that Σ = (A + βΦ^TΦ)^{−1}, which completes the proof.
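The stationary-point argument in the proof can be sanity-checked numerically: at the claimed posterior mean m, a central finite difference of the negative exponent E(w) should vanish. All data below is synthetic and illustrative:

```python
import numpy as np

def neg_exponent(w, Phi, t, alpha, gamma, beta):
    """E(w) = (beta/2)||t - Phi w||^2 + (1/2)(w - gamma)^T A (w - gamma)."""
    A = np.diag(alpha)
    return 0.5 * beta * np.sum((t - Phi @ w) ** 2) \
         + 0.5 * (w - gamma) @ A @ (w - gamma)

rng = np.random.default_rng(3)
N, M = 8, 3
Phi = rng.normal(size=(N, M))
t = rng.normal(size=N)
alpha = np.ones(M)
gamma = np.full(M, 0.2)
beta = 2.0
A = np.diag(alpha)
# Claimed stationary point: the posterior mean m of equation (8)
m = np.linalg.solve(A + beta * Phi.T @ Phi, A @ gamma + beta * Phi.T @ t)

eps = 1e-6
grad = np.array([
    (neg_exponent(m + eps * np.eye(M)[i], Phi, t, alpha, gamma, beta)
     - neg_exponent(m - eps * np.eye(M)[i], Phi, t, alpha, gamma, beta)) / (2 * eps)
    for i in range(M)
])
```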
Further, the step 2 is specifically that,
p(t|X,α,β,γ)=∫p(t|X,w,β)p(w|α,γ)dw (10)
using the formula (6) and the formula (7), it is possible to obtain
Figure BDA0003223100000000119
wherein
Figure BDA00032231000000001110
Figure BDA0003223100000000121
Order to
Figure BDA0003223100000000122
To obtain w ═ a + β ΦTΦ)-1(βΦTt+Aγ)=m,
Figure BDA0003223100000000123
Can obtain the product
Figure BDA0003223100000000124
wherein
Figure BDA0003223100000000125
Using the formula (11) and the formula (12), it is possible to obtain
Figure BDA0003223100000000126
Wherein m ═ a + β ΦTΦ)-1(βΦTt+Aγ),
Figure BDA0003223100000000127
X=(x1,x2,…,xN)。
Further, the step 3 is specifically.
Logarithm of the formula (13) is obtained
Figure BDA0003223100000000128
Using equations (9), (14) and
Figure BDA0003223100000000129
can obtain the product
Figure BDA00032231000000001210
Because of the fact that
Figure BDA00032231000000001211
By using
Figure BDA0003223100000000131
And equation (15), can be derived
Figure BDA0003223100000000132
From the formula (16), it can be found
Figure BDA0003223100000000133
wherein ΣiiIs the ith element of the main diagonal of the a posteriori covariance Σ;
from the formula (12), it can be found
Figure BDA0003223100000000134
wherein miIs the ith component of the posterior mean m;
from the formula (17) and the formula (18), it can be obtained
Figure BDA0003223100000000135
From (19) to
Figure BDA0003223100000000136
Thereby obtaining
Figure BDA0003223100000000137
wherein λi=1-αiΣii,;
Definition of Σ according to equation (9) and
Figure BDA0003223100000000138
can obtain the product
Figure BDA0003223100000000139
According to
Figure BDA00032231000000001310
And Tr (xy)T)=xTy, is obtained
Figure BDA00032231000000001311
Because (A + beta. phi)TΦ)(A+βΦTΦ)-1=IMIs obtained by
ΦTΦΣ=β-1(IM-AΣ) (23)
From the formula (22) and the formula (23), it is possible to obtain
Figure BDA0003223100000000141
From the formula (12), it can be obtained
Figure BDA0003223100000000142
From the formula (24) and the formula (25), it can be obtained
Figure BDA0003223100000000143
Thereby obtaining
Figure BDA0003223100000000144
According to equation (12), the derivative of γ can be obtained
Figure BDA0003223100000000145
Order to
Figure BDA0003223100000000146
Can get gamma as m, so
γi=mi (27)
wherein γiIs the i-th component of γ, miIs the ith component of m.

Claims (5)

1. An image processing evidence function estimation method based on a relevance vector machine, characterized by comprising the following steps:
step 1: according to the modified prior form of the weight parameters, proving that the posterior distribution of the weights for the data in the image is a normal distribution, and computing the mean and covariance of that normal distribution;
step 2: integrating the weight parameters out of the product of the likelihood function and the prior distribution of the weights by means of the multivariate Taylor formula, to obtain an explicit expression for the evidence function, i.e. the marginal likelihood function;
step 3: based on the marginal likelihood function of the data in the image from step 2, maximizing the evidence function containing the hyperparameters by matrix calculus, matrix algebra and optimization methods, thereby obtaining an iterative optimization algorithm for each hyperparameter of the image.
2. The method according to claim 1, characterized in that step 1 specifically rests on the following. When x is a scalar and A = A(x) is an n×n invertible symmetric matrix,

∂/∂x ln|A| = Tr(A^{-1} ∂A/∂x),

where Tr(·) is the trace of a matrix; moreover Tr(xy^T) = x^T y, where x and y are vectors. The operator h·∇ is defined as

h·∇ = h_1 ∂/∂x_1 + h_2 ∂/∂x_2,

and (h·∇)^k denotes its k-fold application, where k ∈ N.

Let h = (h_1, h_2)^T and x = (x_1, x_2)^T, where h_1 is the first component of the vector h, h_2 is the second component of the vector h, x_1 is the first component of the vector x, and x_2 is the second component of the vector x. One obtains

(h·∇) f(x) = h_1 ∂f(x)/∂x_1 + h_2 ∂f(x)/∂x_2

and

(h·∇)^2 f(x) = h_1^2 ∂²f(x)/∂x_1² + 2 h_1 h_2 ∂²f(x)/∂x_1∂x_2 + h_2^2 ∂²f(x)/∂x_2².

Applying the operator h·∇ to f at x repeatedly in this way, the multivariate Taylor formula follows:

f(x + h) = f(x) + (h·∇) f(x) + (1/2!)(h·∇)² f(x) + … + (1/k!)(h·∇)^k f(x) + …,

wherein h = (h_1, h_2)^T.

Define f: R^n → R by f(w) = w^T A w + w^T b + c, wherein A is an n×n invertible symmetric matrix, b and w are n-dimensional column vectors, and c is a scalar; the Taylor expansion of f(w) about a point w_0 is

f(w) = f(w_0) + (w − w_0)^T ∇f(w_0) + (1/2)(w − w_0)^T H (w − w_0),

where the (i, j)-th element of the Hessian H is given by H_{ij} = ∂²f/∂w_i ∂w_j.

The general form of a linear regression model in machine learning is

y(x, w) = w_0 + Σ_{i=1}^{M−1} w_i φ_i(x)   (1)

wherein φ_i(x) is a nonlinear basis function of the input variable, w_0 is a bias parameter, and x is an image data vector.

Defining φ_0(x) = 1, formula (1) can be rewritten as

y(x, w) = w^T φ(x)   (2)

wherein w = (w_0, …, w_{M−1})^T and φ(x) = (φ_0(x), …, φ_{M−1}(x))^T.

The target variable is a deterministic function y(x, w) with additive Gaussian noise, i.e.

t = y(x, w) + ε   (3)

where ε is a normal random variable with mean 0 and precision β, so that

p(t | x, w, β) = N(t | y(x, w), β^{−1})   (4).
3. The method for estimating an image processing evidence function based on a relevance vector machine according to claim 1, characterized in that the prior on the weight parameters in step 1 takes the form

p(w | α, γ) = ∏_{i=1}^{M} N(w_i | γ_i, α_i^{−1})   (5)

where α is the precision vector, α = (α_1, …, α_M)^T, and γ is the mean vector, γ = (γ_1, …, γ_M)^T.

Using equation (4), the likelihood function is obtained:

p(t | X, w, β) = ∏_{n=1}^{N} N(t_n | w^T φ(x_n), β^{−1}) = (β/2π)^{N/2} exp{ −(β/2) ‖t − Φw‖² }   (6)

wherein t = (t_1, …, t_N)^T and M is the number of parameters to be determined.

Similarly,

p(w | α, γ) = (2π)^{−M/2} |A|^{1/2} exp{ −(1/2)(w − γ)^T A (w − γ) }   (7)

wherein A = diag(α_i).

The posterior distribution p(w | t, X, α, β, γ) of the weight parameters is then also a normal distribution N(w | m, Σ), where

m = (A + βΦ^TΦ)^{−1}(Aγ + βΦ^T t)   (8)

Σ = (A + βΦ^TΦ)^{−1}   (9)

wherein Φ is the N×M design matrix with rows φ(x_n)^T, i.e. Φ_{ni} = φ_i(x_n).
4. The method for estimating an image processing evidence function based on a relevance vector machine according to claim 1, characterized in that step 2 is specifically:

p(t | X, α, β, γ) = ∫ p(t | X, w, β) p(w | α, γ) dw   (10)

Using formula (6) and formula (7), one obtains

p(t | X, α, β, γ) = (β/2π)^{N/2} (2π)^{−M/2} |A|^{1/2} ∫ exp{ −E(w) } dw   (11)

wherein E(w) = (β/2) ‖t − Φw‖² + (1/2)(w − γ)^T A (w − γ).

Let ∇_w E(w) = 0; this yields w = (A + βΦ^TΦ)^{−1}(βΦ^T t + Aγ) = m, and since ∇∇_w E(w) = A + βΦ^TΦ = Σ^{−1}, the multivariate Taylor formula gives

E(w) = E(m) + (1/2)(w − m)^T Σ^{−1} (w − m),

wherein

E(m) = (β/2) ‖t − Φm‖² + (1/2)(m − γ)^T A (m − γ)   (12)

so that ∫ exp{ −E(w) } dw = exp{ −E(m) } (2π)^{M/2} |Σ|^{1/2}.

Using formula (11) and formula (12), one obtains

p(t | X, α, β, γ) = (β/2π)^{N/2} |A|^{1/2} |Σ|^{1/2} exp{ −E(m) }   (13)

wherein m = (A + βΦ^TΦ)^{−1}(βΦ^T t + Aγ), Σ = (A + βΦ^TΦ)^{−1}, and X = (x_1, x_2, …, x_N).
5. The method for estimating an image processing evidence function based on a relevance vector machine according to claim 4, characterized in that step 3 is specifically: taking the logarithm of formula (13) gives

ln p(t | X, α, β, γ) = (N/2) ln β − (N/2) ln 2π + (1/2) Σ_{i=1}^{M} ln α_i − E(m) + (1/2) ln|Σ|   (14)

Using equations (9), (14) and ∂ ln|A|/∂x = Tr(A^{−1} ∂A/∂x), one obtains

∂/∂α_i ln|Σ| = −Tr(Σ ∂Σ^{−1}/∂α_i) = −Σ_ii   (15)

Because

∇_w E(w) |_{w=m} = 0   (16)

only the explicit dependence of E(m) on α_i contributes to the derivative; by using this and equation (15), it can be derived that

∂/∂α_i ln p(t | X, α, β, γ) = 1/(2α_i) − ∂E(m)/∂α_i − (1/2) Σ_ii   (17)

wherein Σ_ii is the i-th element of the main diagonal of the posterior covariance Σ.

From formula (12), it can be found that

∂E(m)/∂α_i = (1/2)(m_i − γ_i)²   (18)

wherein m_i is the i-th component of the posterior mean m.

From formula (17) and formula (18), setting the derivative to zero, one obtains

1/α_i = (m_i − γ_i)² + Σ_ii   (19)

From (19), α_i (m_i − γ_i)² = 1 − α_i Σ_ii, thereby obtaining

α_i^{new} = λ_i / (m_i − γ_i)²   (20)

wherein λ_i = 1 − α_i Σ_ii.

According to the definition of Σ in equation (9) and ∂Σ^{−1}/∂β = Φ^TΦ, one obtains

∂/∂β ln|Σ| = −Tr(Σ Φ^TΦ)   (21)

According to ∂ ln|A|/∂x = Tr(A^{−1} ∂A/∂x) and Tr(xy^T) = x^T y, it follows that

∂/∂β ln p(t | X, α, β, γ) = N/(2β) − ∂E(m)/∂β − (1/2) Tr(Σ Φ^TΦ)   (22)

Because (A + βΦ^TΦ)(A + βΦ^TΦ)^{−1} = I_M, one obtains

Φ^TΦ Σ = β^{−1}(I_M − AΣ)   (23)

From formula (22) and formula (23), it can be obtained that

Tr(Σ Φ^TΦ) = β^{−1}(M − Tr(AΣ)) = β^{−1} Σ_{i=1}^{M} λ_i   (24)

From formula (12), it can be obtained that

∂E(m)/∂β = (1/2) ‖t − Φm‖²   (25)

From formula (24) and formula (25), setting ∂/∂β ln p = 0 gives N/β = ‖t − Φm‖² + β^{−1} Σ_i λ_i, thereby obtaining

1/β^{new} = ‖t − Φm‖² / (N − Σ_{i=1}^{M} λ_i)   (26)

According to equation (12), the derivative with respect to γ is ∂E(m)/∂γ = A(γ − m); setting this to zero yields γ = m, so that

γ_i = m_i   (27)

wherein γ_i is the i-th component of γ and m_i is the i-th component of m.
CN202110963746.2A 2021-08-20 2021-08-20 Image processing evidence function estimation method based on correlation vector machine Active CN113779502B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110963746.2A CN113779502B (en) 2021-08-20 2021-08-20 Image processing evidence function estimation method based on correlation vector machine


Publications (2)

Publication Number Publication Date
CN113779502A true CN113779502A (en) 2021-12-10
CN113779502B CN113779502B (en) 2023-08-29

Family

ID=78838587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110963746.2A Active CN113779502B (en) 2021-08-20 2021-08-20 Image processing evidence function estimation method based on correlation vector machine

Country Status (1)

Country Link
CN (1) CN113779502B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100070435A1 (en) * 2008-09-12 2010-03-18 Microsoft Corporation Computationally Efficient Probabilistic Linear Regression
CN102254193A (en) * 2011-07-16 2011-11-23 西安电子科技大学 Relevance vector machine-based multi-class data classifying method
CN103258213A (en) * 2013-04-22 2013-08-21 中国石油大学(华东) Vehicle model dynamic identification method used in intelligent transportation system
CN104732215A (en) * 2015-03-25 2015-06-24 广西大学 Remote-sensing image coastline extracting method based on information vector machine
CN106709918A (en) * 2017-01-20 2017-05-24 成都信息工程大学 Method for segmenting images of multi-element student t distribution mixed model based on spatial smoothing
CN108197435A (en) * 2018-01-29 2018-06-22 绥化学院 Localization method between a kind of multiple characters multi-region for containing error based on marker site genotype
CN108228535A (en) * 2018-01-02 2018-06-29 佛山科学技术学院 A kind of optimal weighting parameter evaluation method of unequal precision measurement data fusion
CN111914865A (en) * 2019-05-08 2020-11-10 天津科技大学 Probability main component analysis method based on random core
CN112053307A (en) * 2020-08-14 2020-12-08 河海大学常州校区 X-ray image linear reconstruction method
US10867171B1 (en) * 2018-10-22 2020-12-15 Omniscience Corporation Systems and methods for machine learning based content extraction from document images


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CARL EDWARD RASMUSSEN et al.: "Healing the relevance vector machine by augmentation", Proceedings of the 22nd International Conference on Machine Learning, p. 689
DAWEI ZOU et al.: "A Logical Framework of the Evidence Function Approximation Associated with Relevance Vector Machine", Mathematical Problems in Engineering, vol. 2020, p. 1
YU Jiongqi; LIANG Guoqian: "Bayes estimation of hyperbolic curve fitting of foundation settlement" (地基沉降双曲线拟合的Bayes估计), Water Power (水力发电), vol. 34, no. 02, p. 26
ZHANG Shishan et al.: "A building detection method integrating DS evidence theory and fuzzy sets" (集成DS证据理论和模糊集的建筑物检测方法), Remote Sensing Information (遥感信息), vol. 35, no. 5, p. 93
LI Xin et al.: "A survey of research and applications based on the relevance vector machine algorithm" (基于相关向量机算法的研究与应用综述), Journal of Information Engineering University (信息工程大学学报), vol. 21, no. 4, p. 433

Also Published As

Publication number Publication date
CN113779502B (en) 2023-08-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant