CN113779502A - Image processing evidence function estimation method based on relevance vector machine - Google Patents
- Publication number: CN113779502A
- Application number: CN202110963746.2A
- Authority: CN (China)
- Prior art keywords: formula, function, vector, image, image processing
- Legal status: Granted
Classifications
- G06F 17/17: Function evaluation by approximation methods, e.g. inter- or extrapolation, smoothing, least mean square method (G: Physics; G06: Computing, calculating or counting; G06F: Electric digital data processing; G06F 17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions; G06F 17/10: Complex mathematical operations)
- G06F 17/11: Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems (same G06F 17/10 hierarchy)
- G06F 17/16: Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization (same G06F 17/10 hierarchy)
- Y02T 10/40: Engine management systems (Y: General tagging of new technological developments; Y02: Technologies or applications for mitigation or adaptation against climate change; Y02T: Climate change mitigation technologies related to transportation; Y02T 10/00: Road transport of goods or passengers; Y02T 10/10: Internal combustion engine [ICE] based vehicles)
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Pure & Applied Mathematics (AREA)
- Mathematical Optimization (AREA)
- Mathematical Analysis (AREA)
- Computational Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Algebra (AREA)
- Databases & Information Systems (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Operations Research (AREA)
- Computing Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an image processing evidence function estimation method based on a relevance vector machine. Step 1: using the modified prior form of the weight parameters, prove that the posterior distribution of the weights for the data in the image is a normal distribution, and obtain its mean and covariance. Step 2: using the multivariate Taylor formula, integrate the product of the likelihood function and the weight prior over the weight parameters of the data in the image to obtain an explicit expression of the evidence function, i.e. the marginal likelihood function. Step 3: based on the marginal likelihood function of the data in the image obtained in step 2, maximize the evidence function containing the hyper-parameters by means of matrix calculus, matrix algebra and optimization methods, thereby obtaining an iterative optimization algorithm for each hyper-parameter of the image. The method addresses the complicated and computationally difficult integral that arises in image processing when the product of the likelihood function and the weight prior is integrated over the weight parameters to obtain the evidence function.
Description
Technical Field
The invention belongs to the field of image processing, and particularly relates to an image processing evidence function estimation method based on a relevance vector machine.
Background
In the estimation of the evidence function associated with a relevance vector machine in the field of image processing, the posterior distribution must be proved to be a normal distribution, and the mean and covariance of that normal distribution must be calculated. Moreover, in the conventional evidence-function formulation associated with a relevance vector machine, the prior distribution of the weight parameters is a product of zero-mean normal distributions, which lacks generality.
In the process of integrating the product of the likelihood function and the weight prior over the weight parameters to obtain the evidence function, a complicated integral has to be evaluated, and the calculation is difficult. There has so far been relatively little research on a new method and logical framework for image processing that evaluates the relevant integrals more simply and logically and maximizes the evidence function more effectively.
Disclosure of Invention
The invention provides an image processing evidence function estimation method based on a relevance vector machine, which addresses the complicated and computationally difficult integral that arises in image processing when the product of the likelihood function and the weight prior is integrated over the weight parameters to obtain the evidence function.
The invention is realized by the following technical scheme:
An image processing evidence function estimation method based on a relevance vector machine, the evidence function estimation method comprising the following steps:
Step 1: according to the modified prior form of the weight parameters, prove that the posterior distribution of the weights for the data in the image is a normal distribution, and obtain the mean and covariance of that normal distribution;
Step 2: according to the multivariate Taylor formula, integrate the product of the likelihood function and the weight prior over the weight parameters of the data in the image to obtain an explicit expression of the evidence function, i.e. the marginal likelihood function;
Step 3: based on the marginal likelihood function of the data in the image obtained in step 2, maximize the evidence function containing the hyper-parameters by means of matrix calculus, matrix algebra and optimization methods, thereby obtaining an iterative optimization algorithm for each hyper-parameter of the image.
Further, step 1 relies on the following matrix-calculus preliminaries: trace identities involving a scalar x and an n×n invertible symmetric matrix A, in particular Tr(xy^T) = x^T y for vectors x and y, where Tr(·) denotes the trace of a matrix; and the differential operator (h·∇_x)^k, k ∈ N, used in the multivariate Taylor formula;
let h = (h_1, h_2)^T and x = (x_1, x_2)^T, where h_1 and h_2 are the first and second components of the vector h and x_1 and x_2 are the first and second components of the vector x; one then obtains, for example, (h·∇_x)f(x) = h_1 ∂f/∂x_1 + h_2 ∂f/∂x_2.
If f: R^n → R is defined as f(w) = w^T A w + w^T b + c, where A is an n×n invertible symmetric matrix, b and w are n-dimensional column vectors and c is a scalar, then the Taylor expansion of f(w) about a point w_0 is exact at second order: f(w_0 + h) = f(w_0) + h^T(2Aw_0 + b) + h^T A h.
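As an illustrative check only (not part of the patent; all variable names below are hypothetical), the following Python sketch verifies the trace identity Tr(xy^T) = x^T y and the exactness of the second-order Taylor expansion of the quadratic form f(w) = w^T A w + w^T b + c for a symmetric A:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Trace identity: Tr(x y^T) = x^T y for vectors x, y.
x, y = rng.normal(size=n), rng.normal(size=n)
assert np.isclose(np.trace(np.outer(x, y)), x @ y)

# Quadratic form f(w) = w^T A w + w^T b + c with A symmetric and invertible.
A = rng.normal(size=(n, n))
A = A @ A.T + n * np.eye(n)          # symmetric positive definite
b, c = rng.normal(size=n), rng.normal()
f = lambda w: w @ A @ w + w @ b + c

# Second-order Taylor expansion about w0 is exact: gradient 2*A*w0 + b, Hessian 2*A.
w0, h = rng.normal(size=n), rng.normal(size=n)
taylor = f(w0) + h @ (2 * A @ w0 + b) + h @ A @ h
assert np.isclose(f(w0 + h), taylor)
print("trace identity and exact quadratic Taylor expansion verified")
```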
The general form of a linear regression model in machine learning is
y(x, w) = w_0 + Σ_{i=1}^{M-1} w_i φ_i(x)   (1)
where φ_i(x) is a nonlinear basis function of the input variable, w_0 is a bias parameter, and x is an image data vector;
defining φ_0(x) = 1, formula (1) can be rewritten as
y(x, w) = Σ_{i=0}^{M-1} w_i φ_i(x) = w^T φ(x)   (2)
where w = (w_0, …, w_{M-1})^T and φ(x) = (φ_0(x), …, φ_{M-1}(x))^T;
the target variable t is a deterministic function y(x, w) with additive Gaussian noise, i.e.
t = y(x, w) + ε   (3)
where ε is a Gaussian random variable with mean 0 and precision β, so that
p(t | x, w, β) = N(t | y(x, w), β^{-1})   (4).
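The patent does not fix a particular basis φ_i(x). Assuming, purely for illustration, a Gaussian RBF basis centred on the training inputs (a common choice for relevance vector machines), a design matrix Φ and synthetic targets following the noise model of formulas (3) and (4) could be built as in the sketch below; the function name design_matrix and all parameter values are hypothetical:

```python
import numpy as np

def design_matrix(X, centers, width=1.0):
    """Design matrix Phi with Phi[n, 0] = 1 and Phi[n, i] = exp(-||x_n - c_i||^2 / (2*width^2)).

    A Gaussian RBF basis is only one possible choice of the nonlinear basis
    functions phi_i(x); the patent leaves the basis unspecified.
    """
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.hstack([np.ones((X.shape[0], 1)), np.exp(-d2 / (2.0 * width ** 2))])

# Synthetic 1-D "image feature" data following t = y(x, w) + eps, eps ~ N(0, beta^{-1}).
rng = np.random.default_rng(1)
N, beta_true = 50, 25.0
X = rng.uniform(-1.0, 1.0, size=(N, 1))
t = np.sin(3.0 * X[:, 0]) + rng.normal(scale=beta_true ** -0.5, size=N)
Phi = design_matrix(X, centers=X)        # N x M design matrix, here M = N + 1
```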
Further, the prior over the weight parameters in step 1 takes the form
p(w | α, γ) = Π_{i=1}^{M} N(w_i | γ_i, α_i^{-1})   (5)
where α is the precision (inverse variance) vector, α = (α_1, …, α_M)^T, and γ is the mean vector, γ = (γ_1, …, γ_M)^T.
Using formula (4), the likelihood function of the N observations t = (t_1, …, t_N)^T is obtained as
p(t | X, w, β) = Π_{n=1}^{N} N(t_n | w^T φ(x_n), β^{-1})   (6)
Similarly, the prior (5) can be written as
p(w | α, γ) = N(w | γ, A^{-1})   (7)
where A = diag(α_i);
the resulting posterior distribution p(w | t, X, α, β, γ) of the weight parameters is also a normal distribution N(w | m, Σ), where, with Φ denoting the N×M design matrix with elements Φ_{ni} = φ_i(x_n),
m = (A + βΦ^TΦ)^{-1}(Aγ + βΦ^T t)   (8)
Σ = (A + βΦ^TΦ)^{-1}   (9)
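A minimal sketch of formulas (8) and (9), assuming Φ, t, α, γ and β are available as NumPy arrays and scalars; the helper name posterior and the toy values are illustrative only:

```python
import numpy as np

def posterior(Phi, t, alpha, gamma, beta):
    """Posterior N(w | m, Sigma) of the weights for the non-zero-mean prior N(w | gamma, A^{-1}).

    Implements formulas (8) and (9): Sigma = (A + beta Phi^T Phi)^{-1},
    m = Sigma (A gamma + beta Phi^T t).
    """
    A = np.diag(alpha)
    Sigma = np.linalg.inv(A + beta * Phi.T @ Phi)
    m = Sigma @ (A @ gamma + beta * Phi.T @ t)
    return m, Sigma

# Toy check with random data (illustrative values only).
rng = np.random.default_rng(2)
N, M = 30, 5
Phi = rng.normal(size=(N, M))
t = rng.normal(size=N)
alpha = np.full(M, 1.0)          # prior precisions
gamma = np.zeros(M)              # prior means (non-zero values are allowed)
m, Sigma = posterior(Phi, t, alpha, gamma, 10.0)
```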
Further, step 2 is specifically as follows:
p(t | X, α, β, γ) = ∫ p(t | X, w, β) p(w | α, γ) dw   (10)
Using formula (6) and formula (7), it is possible to obtain
p(t | X, α, β, γ) = (β/(2π))^{N/2} (1/(2π))^{M/2} |A|^{1/2} ∫ exp{−E(w)} dw   (11)
where
E(w) = (β/2)‖t − Φw‖² + (1/2)(w − γ)^T A (w − γ)   (12)
which, completing the square about the posterior mean m, can be written as E(w) = E(m) + (1/2)(w − m)^T (A + βΦ^TΦ)(w − m);
using formula (11) and formula (12), it is possible to obtain
p(t | X, α, β, γ) = (β/(2π))^{N/2} |A|^{1/2} |Σ|^{1/2} exp{−E(m)}   (13)
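The sketch below evaluates the log of the evidence expression reconstructed as formula (13) and, as a sanity check, compares it with the equivalent direct Gaussian marginal of t under the same model, t ~ N(Φγ, β^{-1}I_N + ΦA^{-1}Φ^T), a standard identity that the patent does not state explicitly; all names and values are illustrative:

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_evidence(Phi, t, alpha, gamma, beta):
    """Log of the evidence expression reconstructed as formula (13) above.

    E(m) is the completed-square exponent evaluated at the posterior mean m.
    """
    N, M = Phi.shape
    A = np.diag(alpha)
    Sigma = np.linalg.inv(A + beta * Phi.T @ Phi)
    m = Sigma @ (A @ gamma + beta * Phi.T @ t)
    E_m = 0.5 * beta * np.sum((t - Phi @ m) ** 2) + 0.5 * (m - gamma) @ A @ (m - gamma)
    return (0.5 * N * np.log(beta) - 0.5 * N * np.log(2 * np.pi)
            + 0.5 * np.linalg.slogdet(A)[1] + 0.5 * np.linalg.slogdet(Sigma)[1] - E_m)

# Cross-check against the direct marginal t ~ N(Phi gamma, beta^{-1} I + Phi A^{-1} Phi^T).
rng = np.random.default_rng(3)
N, M, beta = 20, 4, 5.0
Phi = rng.normal(size=(N, M))
t = rng.normal(size=N)
alpha = rng.uniform(0.5, 2.0, size=M)
gamma = rng.normal(size=M)
cov = np.eye(N) / beta + Phi @ np.diag(1.0 / alpha) @ Phi.T
direct = multivariate_normal(mean=Phi @ gamma, cov=cov).logpdf(t)
assert np.isclose(log_evidence(Phi, t, alpha, gamma, beta), direct)
```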
Further, step 3 is specifically as follows:
taking the logarithm of formula (13) gives
ln p(t | X, α, β, γ) = (N/2) ln β − (N/2) ln(2π) + (1/2) ln|A| + (1/2) ln|Σ| − E(m)   (16)
Differentiating formula (16) with respect to α_i gives formula (17), where Σ_ii is the i-th element of the main diagonal of the posterior covariance Σ;
from formula (12), formula (18) can be found, where m_i is the i-th component of the posterior mean m;
from formula (17) and formula (18), the iterative re-estimation formula for α_i is obtained, where λ_i = 1 − α_iΣ_ii;
because (A + βΦ^TΦ)(A + βΦ^TΦ)^{-1} = I_M, it follows that
Φ^TΦΣ = β^{-1}(I_M − AΣ)   (23)
From formula (22) and formula (23), formula (24) can be obtained;
from formula (12), formula (25) can be obtained;
from formula (24) and formula (25), the iterative re-estimation formula for β is obtained;
thereby one also obtains
γ_i = m_i   (27)
where γ_i is the i-th component of γ and m_i is the i-th component of m.
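Because the explicit re-estimation formulas (17) to (26) are not reproduced legibly in this record, the loop below is only a sketch: the α_i and β updates are assumptions modelled on the standard relevance-vector-machine evidence-maximization updates, adapted to the non-zero-mean prior, while λ_i = 1 − α_iΣ_ii and γ_i = m_i are taken from the text above.

```python
import numpy as np

def evidence_iterations(Phi, t, n_iter=20, alpha0=1.0, beta0=1.0, eps=1e-8, alpha_max=1e8):
    """Fixed-point re-estimation of the hyper-parameters (alpha, beta, gamma).

    Only lambda_i = 1 - alpha_i * Sigma_ii and gamma_i = m_i are given explicitly in the
    text above; the alpha and beta updates are ASSUMED here, following the standard RVM
    evidence-maximization updates with (m_i - gamma_i) in place of m_i.
    """
    N, M = Phi.shape
    alpha = np.full(M, alpha0)
    gamma = np.zeros(M)
    beta = beta0
    for _ in range(n_iter):
        A = np.diag(alpha)
        Sigma = np.linalg.inv(A + beta * Phi.T @ Phi)            # formula (9)
        m = Sigma @ (A @ gamma + beta * Phi.T @ t)               # formula (8)
        lam = 1.0 - alpha * np.diag(Sigma)                       # lambda_i = 1 - alpha_i Sigma_ii
        alpha = np.minimum(lam / ((m - gamma) ** 2 + eps), alpha_max)    # assumed update
        beta = (N - lam.sum()) / (np.sum((t - Phi @ m) ** 2) + eps)      # assumed update
        gamma = m.copy()                                         # formula (27)
    return alpha, beta, gamma, m, Sigma

# Illustrative run on synthetic data.
rng = np.random.default_rng(4)
N, M = 40, 6
Phi = rng.normal(size=(N, M))
w_true = np.array([1.5, 0.0, -2.0, 0.0, 0.0, 0.7])
t = Phi @ w_true + rng.normal(scale=0.1, size=N)
alpha, beta, gamma, m, Sigma = evidence_iterations(Phi, t)
```

In practice one would also monitor the log evidence between iterations and prune basis functions whose α_i reaches alpha_max, as is usual for relevance vector machines.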
The invention has the beneficial effects that:
the method adopts a more general weight parameter prior form instead of the traditional normal distribution that the mean value of each weight parameter is zero, the parameters have a larger value range, and then the image data maximizes an evidence function containing the hyper-parameters according to the methods of matrix calculus, matrix algebra and optimization, thereby being beneficial to improving the resolution of the image data.
Drawings
FIG. 1 is a schematic flow diagram of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An image processing evidence function estimation method based on a relevance vector machine, the evidence function estimation method comprising the following steps:
Step 1: according to the modified prior form of the weight parameters, prove that the posterior distribution of the weights for the data in the image is a normal distribution, and obtain the mean and covariance of that normal distribution;
Step 2: according to the multivariate Taylor formula, integrate the product of the likelihood function and the weight prior over the weight parameters of the data in the image to obtain an explicit expression of the evidence function, i.e. the marginal likelihood function;
Step 3: based on the marginal likelihood function of the data in the image obtained in step 2, maximize the evidence function containing the hyper-parameters by means of matrix calculus, matrix algebra and optimization methods, thereby obtaining an iterative optimization algorithm for each hyper-parameter of the image.
Further, step 1 relies on the following matrix-calculus preliminaries: trace identities involving a scalar x and an n×n invertible symmetric matrix A, in particular Tr(xy^T) = x^T y for vectors x and y, where Tr(·) denotes the trace of a matrix; and the differential operator (h·∇_x)^k, k ∈ N, used in the multivariate Taylor formula;
let h = (h_1, h_2)^T and x = (x_1, x_2)^T, where h_1 and h_2 are the first and second components of the vector h and x_1 and x_2 are the first and second components of the vector x; one then obtains, for example, (h·∇_x)f(x) = h_1 ∂f/∂x_1 + h_2 ∂f/∂x_2.
If f: R^n → R is defined as f(w) = w^T A w + w^T b + c, where A is an n×n invertible symmetric matrix, b and w are n-dimensional column vectors and c is a scalar, then the Taylor expansion of f(w) about a point w_0 is exact at second order: f(w_0 + h) = f(w_0) + h^T(2Aw_0 + b) + h^T A h.
The general form of a linear regression model in machine learning is
y(x, w) = w_0 + Σ_{i=1}^{M-1} w_i φ_i(x)   (1)
where φ_i(x) is a nonlinear basis function of the input variable, w_0 is a bias parameter, and x is an image data vector;
defining φ_0(x) = 1, formula (1) can be rewritten as
y(x, w) = Σ_{i=0}^{M-1} w_i φ_i(x) = w^T φ(x)   (2)
where w = (w_0, …, w_{M-1})^T and φ(x) = (φ_0(x), …, φ_{M-1}(x))^T;
the target variable t is a deterministic function y(x, w) with additive Gaussian noise, i.e.
t = y(x, w) + ε   (3)
where ε is a Gaussian random variable with mean 0 and precision β, so that
p(t | x, w, β) = N(t | y(x, w), β^{-1})   (4).
Further, the prior over the weight parameters in step 1 takes the form
p(w | α, γ) = Π_{i=1}^{M} N(w_i | γ_i, α_i^{-1})   (5)
where α is the precision (inverse variance) vector, α = (α_1, …, α_M)^T, and γ is the mean vector, γ = (γ_1, …, γ_M)^T.
Using formula (4), the likelihood function of the N observations t = (t_1, …, t_N)^T is obtained as
p(t | X, w, β) = Π_{n=1}^{N} N(t_n | w^T φ(x_n), β^{-1})   (6)
Similarly, the prior (5) can be written as
p(w | α, γ) = N(w | γ, A^{-1})   (7)
where A = diag(α_i);
the resulting posterior distribution p(w | t, X, α, β, γ) of the weight parameters is also a normal distribution N(w | m, Σ), where, with Φ denoting the N×M design matrix with elements Φ_{ni} = φ_i(x_n),
m = (A + βΦ^TΦ)^{-1}(Aγ + βΦ^T t)   (8)
Σ = (A + βΦ^TΦ)^{-1}   (9)
Before the proof, note the following:
let f(x) = (1/2)(x − μ)^T Σ^{-1}(x − μ) be the negative exponent of the normal distribution N(x | μ, Σ); setting its gradient to zero gives x = μ, which implies that the stationary point of f(x) is the mean of the normal distribution, while the second-order gradient of f(x) is the inverse of the covariance.
It is demonstrated below that p(w | t, X, α, β, γ) is a normal distribution;
from formula (6) and formula (7), the negative exponent of the product p(t | X, w, β) p(w | α, γ) is obtained as E(w) in formula (12);
setting the gradient of E(w) to zero gives the stationary point
m = w = (A + βΦ^TΦ)^{-1}(Aγ + βΦ^T t),
and the second-order gradient of E(w) is A + βΦ^TΦ = Σ^{-1}, so the posterior is the normal distribution N(w | m, Σ).
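A small numerical check of the argument above (illustrative values only): the analytic stationary point of the negative exponent E(w) coincides with the posterior mean m of formula (8), as confirmed by a finite-difference gradient.

```python
import numpy as np

rng = np.random.default_rng(5)
N, M, beta = 25, 4, 3.0
Phi = rng.normal(size=(N, M))
t = rng.normal(size=N)
alpha = rng.uniform(0.5, 2.0, size=M)
gamma = rng.normal(size=M)
A = np.diag(alpha)

# Negative exponent of p(t|X,w,beta) p(w|alpha,gamma), as in formula (12).
E = lambda w: 0.5 * beta * np.sum((t - Phi @ w) ** 2) + 0.5 * (w - gamma) @ A @ (w - gamma)

# Analytic stationary point and Hessian from the proof above.
H = A + beta * Phi.T @ Phi                                # Hessian = inverse posterior covariance
m = np.linalg.solve(H, A @ gamma + beta * Phi.T @ t)      # formula (8)

# Numerical gradient at m should vanish (central differences).
h = 1e-6
grad = np.array([(E(m + h * e) - E(m - h * e)) / (2 * h) for e in np.eye(M)])
assert np.allclose(grad, 0.0, atol=1e-4)
```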
Further, step 2 is specifically as follows:
p(t | X, α, β, γ) = ∫ p(t | X, w, β) p(w | α, γ) dw   (10)
Using formula (6) and formula (7), it is possible to obtain
p(t | X, α, β, γ) = (β/(2π))^{N/2} (1/(2π))^{M/2} |A|^{1/2} ∫ exp{−E(w)} dw   (11)
where
E(w) = (β/2)‖t − Φw‖² + (1/2)(w − γ)^T A (w − γ)   (12)
which, completing the square about the posterior mean m, can be written as E(w) = E(m) + (1/2)(w − m)^T (A + βΦ^TΦ)(w − m);
using formula (11) and formula (12), it is possible to obtain
p(t | X, α, β, γ) = (β/(2π))^{N/2} |A|^{1/2} |Σ|^{1/2} exp{−E(m)}   (13)
Further, step 3 is specifically as follows:
taking the logarithm of formula (13) gives
ln p(t | X, α, β, γ) = (N/2) ln β − (N/2) ln(2π) + (1/2) ln|A| + (1/2) ln|Σ| − E(m)   (16)
Differentiating formula (16) with respect to α_i gives formula (17), where Σ_ii is the i-th element of the main diagonal of the posterior covariance Σ;
from formula (12), formula (18) can be found, where m_i is the i-th component of the posterior mean m;
from formula (17) and formula (18), the iterative re-estimation formula for α_i is obtained, where λ_i = 1 − α_iΣ_ii;
because (A + βΦ^TΦ)(A + βΦ^TΦ)^{-1} = I_M, it follows that
Φ^TΦΣ = β^{-1}(I_M − AΣ)   (23)
From formula (22) and formula (23), formula (24) can be obtained;
from formula (12), formula (25) can be obtained;
from formula (24) and formula (25), the iterative re-estimation formula for β is obtained;
thereby one also obtains
γ_i = m_i   (27)
where γ_i is the i-th component of γ and m_i is the i-th component of m.
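As a usage note (not stated in the patent), once m, Σ and β have been estimated, predictions for a new input follow the standard Bayesian linear-regression form; the helper predict and the stand-in values below are hypothetical.

```python
import numpy as np

def predict(phi_star, m, Sigma, beta):
    """Predictive mean and variance for a new basis vector phi_star = phi(x*).

    Standard Bayesian linear-regression prediction (not stated explicitly in the patent):
    mean = m^T phi_star, variance = 1/beta + phi_star^T Sigma phi_star.
    """
    mean = m @ phi_star
    var = 1.0 / beta + phi_star @ Sigma @ phi_star
    return mean, var

# Illustrative stand-in values; in practice m, Sigma and beta come from the fitted posterior.
rng = np.random.default_rng(6)
M = 6
m, beta = rng.normal(size=M), 50.0
L = rng.normal(size=(M, M))
Sigma = L @ L.T / M + 1e-3 * np.eye(M)      # any symmetric positive-definite covariance
phi_star = rng.normal(size=M)
mean, var = predict(phi_star, m, Sigma, beta)
```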
Claims (5)
1. An image processing evidence function estimation method based on a relevance vector machine, characterized by comprising the following steps:
step 1: according to the modified prior form of the weight parameters, proving that the posterior distribution of the weights for the data in the image is a normal distribution, and obtaining the mean and covariance of that normal distribution;
step 2: according to the multivariate Taylor formula, integrating the product of the likelihood function and the weight prior over the weight parameters of the data in the image to obtain an explicit expression of the evidence function, i.e. the marginal likelihood function;
step 3: based on the marginal likelihood function of the data in the image obtained in step 2, maximizing the evidence function containing the hyper-parameters by means of matrix calculus, matrix algebra and optimization methods, thereby obtaining an iterative optimization algorithm for each hyper-parameter of the image.
2. The method according to claim 1, wherein step 1 relies on the following matrix-calculus preliminaries: trace identities involving a scalar x and an n×n invertible symmetric matrix A, in particular Tr(xy^T) = x^T y for vectors x and y, where Tr(·) denotes the trace of a matrix; and the differential operator (h·∇_x)^k, k ∈ N, used in the multivariate Taylor formula;
let h = (h_1, h_2)^T and x = (x_1, x_2)^T, where h_1 and h_2 are the first and second components of the vector h and x_1 and x_2 are the first and second components of the vector x; one then obtains, for example, (h·∇_x)f(x) = h_1 ∂f/∂x_1 + h_2 ∂f/∂x_2;
if f: R^n → R is defined as f(w) = w^T A w + w^T b + c, where A is an n×n invertible symmetric matrix, b and w are n-dimensional column vectors and c is a scalar, then the Taylor expansion of f(w) about a point w_0 is exact at second order: f(w_0 + h) = f(w_0) + h^T(2Aw_0 + b) + h^T A h;
the general form of a linear regression model in machine learning is
y(x, w) = w_0 + Σ_{i=1}^{M-1} w_i φ_i(x)   (1)
where φ_i(x) is a nonlinear basis function of the input variable, w_0 is a bias parameter, and x is an image data vector;
defining φ_0(x) = 1, formula (1) can be rewritten as
y(x, w) = Σ_{i=0}^{M-1} w_i φ_i(x) = w^T φ(x)   (2)
where w = (w_0, …, w_{M-1})^T and φ(x) = (φ_0(x), …, φ_{M-1}(x))^T;
the target variable t is a deterministic function y(x, w) with additive Gaussian noise, i.e.
t = y(x, w) + ε   (3)
where ε is a Gaussian random variable with mean 0 and precision β, so that
p(t | x, w, β) = N(t | y(x, w), β^{-1})   (4).
3. The method for estimating an image processing evidence function based on a relevance vector machine according to claim 1, wherein the prior over the weight parameters in step 1 takes the form
p(w | α, γ) = Π_{i=1}^{M} N(w_i | γ_i, α_i^{-1})   (5)
where α is the precision vector, α = (α_1, …, α_M)^T, and γ is the mean vector, γ = (γ_1, …, γ_M)^T;
using formula (4), the likelihood function of the N observations t = (t_1, …, t_N)^T is obtained as
p(t | X, w, β) = Π_{n=1}^{N} N(t_n | w^T φ(x_n), β^{-1})   (6)
similarly, the prior (5) can be written as
p(w | α, γ) = N(w | γ, A^{-1})   (7)
where A = diag(α_i);
the resulting posterior distribution p(w | t, X, α, β, γ) of the weight parameters is also a normal distribution N(w | m, Σ), where, with Φ denoting the N×M design matrix with elements Φ_{ni} = φ_i(x_n),
m = (A + βΦ^TΦ)^{-1}(Aγ + βΦ^T t)   (8)
Σ = (A + βΦ^TΦ)^{-1}   (9).
4. The method for estimating an image processing evidence function based on a relevance vector machine according to claim 1, wherein step 2 is specifically:
p(t | X, α, β, γ) = ∫ p(t | X, w, β) p(w | α, γ) dw   (10)
using formula (6) and formula (7), it is possible to obtain
p(t | X, α, β, γ) = (β/(2π))^{N/2} (1/(2π))^{M/2} |A|^{1/2} ∫ exp{−E(w)} dw   (11)
where
E(w) = (β/2)‖t − Φw‖² + (1/2)(w − γ)^T A (w − γ)   (12)
which, completing the square about the posterior mean m, can be written as E(w) = E(m) + (1/2)(w − m)^T (A + βΦ^TΦ)(w − m);
using formula (11) and formula (12), it is possible to obtain
p(t | X, α, β, γ) = (β/(2π))^{N/2} |A|^{1/2} |Σ|^{1/2} exp{−E(m)}   (13).
5. The method for estimating an image processing evidence function based on a relevance vector machine according to claim 4, wherein step 3 is specifically:
taking the logarithm of formula (13) gives
ln p(t | X, α, β, γ) = (N/2) ln β − (N/2) ln(2π) + (1/2) ln|A| + (1/2) ln|Σ| − E(m)   (16)
differentiating formula (16) with respect to α_i gives formula (17), where Σ_ii is the i-th element of the main diagonal of the posterior covariance Σ;
from formula (12), formula (18) can be found, where m_i is the i-th component of the posterior mean m;
from formula (17) and formula (18), the iterative re-estimation formula for α_i is obtained, where λ_i = 1 − α_iΣ_ii;
because (A + βΦ^TΦ)(A + βΦ^TΦ)^{-1} = I_M, it follows that
Φ^TΦΣ = β^{-1}(I_M − AΣ)   (23)
from formula (22) and formula (23), formula (24) can be obtained;
from formula (12), formula (25) can be obtained;
from formula (24) and formula (25), the iterative re-estimation formula for β is obtained;
thereby one also obtains
γ_i = m_i   (27)
where γ_i is the i-th component of γ and m_i is the i-th component of m.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110963746.2A CN113779502B (en) | 2021-08-20 | 2021-08-20 | Image processing evidence function estimation method based on correlation vector machine |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110963746.2A CN113779502B (en) | 2021-08-20 | 2021-08-20 | Image processing evidence function estimation method based on correlation vector machine |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113779502A true CN113779502A (en) | 2021-12-10 |
CN113779502B CN113779502B (en) | 2023-08-29 |
Family
ID=78838587
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110963746.2A Active CN113779502B (en) | 2021-08-20 | 2021-08-20 | Image processing evidence function estimation method based on correlation vector machine |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113779502B (en) |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100070435A1 (en) * | 2008-09-12 | 2010-03-18 | Microsoft Corporation | Computationally Efficient Probabilistic Linear Regression |
CN102254193A (en) * | 2011-07-16 | 2011-11-23 | 西安电子科技大学 | Relevance vector machine-based multi-class data classifying method |
CN103258213A (en) * | 2013-04-22 | 2013-08-21 | 中国石油大学(华东) | Vehicle model dynamic identification method used in intelligent transportation system |
CN104732215A (en) * | 2015-03-25 | 2015-06-24 | 广西大学 | Remote-sensing image coastline extracting method based on information vector machine |
CN106709918A (en) * | 2017-01-20 | 2017-05-24 | 成都信息工程大学 | Method for segmenting images of multi-element student t distribution mixed model based on spatial smoothing |
CN108228535A (en) * | 2018-01-02 | 2018-06-29 | 佛山科学技术学院 | A kind of optimal weighting parameter evaluation method of unequal precision measurement data fusion |
CN108197435A (en) * | 2018-01-29 | 2018-06-22 | 绥化学院 | Localization method between a kind of multiple characters multi-region for containing error based on marker site genotype |
US10867171B1 (en) * | 2018-10-22 | 2020-12-15 | Omniscience Corporation | Systems and methods for machine learning based content extraction from document images |
CN111914865A (en) * | 2019-05-08 | 2020-11-10 | 天津科技大学 | Probability main component analysis method based on random core |
CN112053307A (en) * | 2020-08-14 | 2020-12-08 | 河海大学常州校区 | X-ray image linear reconstruction method |
Non-Patent Citations (5)
Title |
---|
CARL EDWARD RASMUSSEN et al.: "Healing the relevance vector machine by augmentation", Proceedings of the 22nd International Conference on Machine Learning, page 689 *
DAWEI ZOU et al.: "A Logical Framework of the Evidence Function Approximation Associated with Relevance Vector Machine", Mathematical Problems in Engineering, vol. 2020, page 1 *
YU Jiongqi; LIANG Guoqian: "Bayesian estimation for hyperbolic-curve fitting of foundation settlement" (地基沉降双曲线拟合的Bayes估计), Water Power (水力发电), vol. 34, no. 02, page 26 *
ZHANG Shishan et al.: "A building detection method integrating DS evidence theory and fuzzy sets" (集成DS证据理论和模糊集的建筑物检测方法), Remote Sensing Information (遥感信息), vol. 35, no. 5, page 93 *
LI Xin et al.: "A survey of research and applications of the relevance vector machine algorithm" (基于相关向量机算法的研究与应用综述), Journal of Information Engineering University (信息工程大学学报), vol. 21, no. 4, page 433 *
Also Published As
Publication number | Publication date |
---|---|
CN113779502B (en) | 2023-08-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zhang et al. | Robust low-rank kernel multi-view subspace clustering based on the schatten p-norm and correntropy | |
Hajivassiliou et al. | The method of simulated scores for the estimation of LDV models | |
Yan et al. | An adaptive surrogate modeling based on deep neural networks for large-scale Bayesian inverse problems | |
Poyiadjis et al. | Particle approximations of the score and observed information matrix in state space models with application to parameter estimation | |
Zheng et al. | Efficient variational Bayesian approximation method based on subspace optimization | |
JPWO2005119507A1 (en) | High-speed and high-precision singular value decomposition method, program and apparatus for matrix | |
Smith | Implicit copulas: An overview | |
Mollapourasl et al. | RBF-PU method for pricing options under the jump–diffusion model with local volatility | |
Yang et al. | On MCMC sampling in self-exciting integer-valued threshold time series models | |
Liu et al. | Efficient low-order system identification from low-quality step response data with rank-constrained optimization | |
CN109657693B (en) | Classification method based on correlation entropy and transfer learning | |
Chen et al. | Kalman Filtering Under Information Theoretic Criteria | |
CN107451684A (en) | Stock market's probability forecasting method based on core stochastic approximation | |
Labsir et al. | An intrinsic Bayesian bound for estimators on the Lie groups SO (3) and SE (3) | |
CN113779502A (en) | Image processing evidence function estimation method based on correlation vector machine | |
Hofert et al. | Estimators for Archimedean copulas in high dimensions | |
Yan et al. | An acceleration strategy for randomize-then-optimize sampling via deep neural networks | |
CN114756535A (en) | Bayes tensor completion algorithm based on complex noise | |
Guan et al. | A surface defect detection method of the magnesium alloy sheet based on deformable convolution neural network | |
Ait-El-Fquih et al. | Parallel-and cyclic-iterative variational Bayes for fast Kalman filtering in large-dimensions | |
Sha et al. | Adaptive restoration and reconstruction of incomplete flow fields based on unsupervised learning | |
Martinez et al. | A note on the likelihood and moments of the skew-normal distribution | |
CN111882441A (en) | User prediction interpretation Treeshap method based on financial product recommendation scene | |
Chung et al. | The variable projected augmented Lagrangian method | |
Shen et al. | A partial PPa S-ADMM for multi-block for separable convex optimization with linear constraints |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |