CN108197414A - An improved parameter estimation algorithm for structural equation models - Google Patents
Classifications
- G06F30/20 — Design optimisation, verification or simulation (G: Physics; G06: Computing, calculating or counting; G06F: Electric digital data processing; G06F30/00: Computer-aided design [CAD])
- G06F2111/10 — Numerical modelling (G06F2111/00: Details relating to CAD techniques)
Abstract
The invention discloses an improved parameter estimation algorithm for structural equation models. Conventional estimation of structural equation model parameters places many restrictions on the sample data, and when the data cannot satisfy these restrictions the model often cannot be solved. The present invention therefore proposes an improved parameter estimation algorithm based on matrix differentials. The algorithm combines the advantages of LISREL and PLS: it first differentiates the fitting function with respect to the eight parameter matrices of the structural equation model, then optimizes the fitting function with an improved BFGS quasi-Newton method, and finally simulates the result in LISREL. Experiments show that the algorithm effectively prevents unreasonable initial values from producing singular matrices that make the model unsolvable.
Description
Technical field
The present invention relates to parameter estimation methods for structural equation models, and in particular to an improved parameter estimation algorithm for structural equation models.
Background art
The structural equation model was proposed by statisticians such as Jöreskog to address the shortcomings of traditional causal models and path analysis, by introducing factor analysis into path analysis. It organically combines latent-variable models, as represented by factor analysis, with conventional linear causal models, as represented by path analysis, and is a research method that unifies the construction, estimation, and testing of causal models. The model contains both observable (manifest) variables and latent variables that cannot be observed directly. A structural equation model can replace methods such as multiple regression, path analysis, factor analysis, and analysis of covariance, clearly revealing both the overall influence of the analysis indicators and the correlations among the indicators.
The structural equation model studies the relationship between each latent variable and its set of manifest variables, and at the same time yields a composite index that integrates each latent variable and represents all target variables in the system well. The introduction of latent variables makes the analysis more thorough, and path analysis clearly summarizes the correlations among the random variables. Structural equation models are used to study complex factor relationships and have been applied in fields such as coal-mine safety factor analysis, medical risk analysis, target-area selection, project evaluation, cash-flow risk analysis, and contingency management, in order to reveal the influence relationships among the variables under study. Research shows that structural equation models can fully extract the information contained in the original variables, and that the latent variables have good explanatory power.
Summary of the invention
The object of the present invention is to propose an improved parameter estimation algorithm for structural equation models. The algorithm combines the advantages of LISREL and PLS: it first differentiates the fitting function with respect to the eight parameter matrices of the structural equation model, then optimizes the fitting function with an improved BFGS quasi-Newton method, and finally simulates the result in LISREL. Experiments show that the algorithm effectively prevents unreasonable initial values from producing singular matrices that make the model unsolvable.
The technical solution is as follows:
An improved parameter estimation algorithm for structural equation models, comprising the following steps:
Step 1: compute the initial value x_0 by PLS; set the initial inverse Hessian H_0 = I, 0 ≤ ε ≤ 1, and k := 0.
Step 2: compute the gradient g_k = ∇F_ML(x_k) of the fitting function F_ML. If ||g_k|| ≤ ε, stop; otherwise compute the search direction d_k = -H_k g_k.
Here F_ML is the fitting function obtained by maximum-likelihood estimation; ∇F_ML is the gradient of F_ML; H_k is the inverse Hessian; x is the exogenous measured variable; y is the endogenous measured variable; ξ is the exogenous latent variable; η is the endogenous latent variable; δ is the measurement error of the exogenous measured variable x; ε is the measurement error of the endogenous measured variable y; Λ_x describes the relationship between x and ξ, i.e. the factor loadings of the exogenous measured variables on the exogenous latent variables; Λ_y describes the relationship between y and η, i.e. the factor loadings of the endogenous measured variables on the endogenous latent variables; B is the path-coefficient matrix among the endogenous latent variables; Γ is the path-coefficient matrix of the influence of the exogenous latent variables on the endogenous latent variables; Φ is the covariance matrix of the latent variables ξ; Ψ is the covariance matrix of the residual vector ζ; Θ_ε is the covariance matrix of ε; Θ_δ is the covariance matrix of δ.
Step 3: along the direction d_k, obtain the step length a_k by Wolfe-Powell inexact line search, and set x_{k+1} = x_k + a_k d_k.
Step 4: update H_k by the BFGS formula to obtain H_{k+1} = V_k' H_k V_k + ρ_k s_k s_k', where ρ_k = 1/(y_k' s_k), V_k = I - ρ_k y_k s_k', s_k = x_{k+1} - x_k, and y_k = ∇F_ML(x_{k+1}) - ∇F_ML(x_k).
Step 5: test whether the Hessian matrix B_k is positive definite. If B_k is not positive definite, modify it with the modified Newton method, and store the limited-memory pairs {s_k, y_k} with the L-BFGS algorithm. Return to Step 2.
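Steps 1-5 above can be sketched in code. The following is a minimal sketch, not the patented implementation: a backtracking Armijo search stands in for the Wolfe-Powell line search, the PLS initialization and the Hessian modification of Step 5 are omitted, and a toy quadratic stands in for F_ML (all of these are assumptions of the sketch).

```python
import numpy as np

def bfgs_minimize(f, grad, x0, eps=1e-6, max_iter=200):
    """Sketch of Steps 1-5: BFGS iteration on the inverse Hessian H.
    A backtracking Armijo search stands in for the patent's
    Wolfe-Powell line search (an assumption of this sketch)."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))                        # Step 1: H_0 = I
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:          # Step 2: stopping test
            break
        d = -H @ g                            # Step 2: d_k = -H_k g_k
        a = 1.0                               # Step 3: inexact line search
        while f(x + a * d) > f(x) + 1e-4 * a * (g @ d):
            a *= 0.5
        s = a * d
        y = grad(x + s) - g
        x = x + s
        if y @ s > 1e-12:                     # Step 4: BFGS update of H_k
            rho = 1.0 / (y @ s)
            V = np.eye(len(x)) - rho * np.outer(y, s)
            H = V.T @ H @ V + rho * np.outer(s, s)
    return x

# toy quadratic as a stand-in for F_ML (illustrative only)
Q = np.diag([1.0, 10.0])
x_star = bfgs_minimize(lambda x: 0.5 * x @ Q @ x, lambda x: Q @ x,
                       np.array([3.0, -2.0]))
```

Because only pairs with y_k' s_k > 0 are used for the update, H stays positive definite and every d_k is a descent direction.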
The beneficial effects of the present invention are as follows:
The improved parameter estimation algorithm of the present invention combines the advantages of LISREL and PLS. It first differentiates the fitting function with respect to the eight parameter matrices of the structural equation model, then optimizes the fitting function with an improved BFGS quasi-Newton method, and finally simulates the result in LISREL. Experiments show that the algorithm effectively prevents unreasonable initial values from producing singular matrices that make the model unsolvable.
Description of the drawings
Fig. 1 compares the iteration counts of the two algorithms.
Specific embodiment
The technical solution of the present invention is described in more detail below with reference to the accompanying drawing and specific embodiments.
1 Structural equation models and their solution
1.1 Conditions for a structural equation model
A structural equation model (SEM) can study not only observable variables but also latent variables; it can examine direct and indirect interactions among variables, and it can compare the quality of theoretical models. The relationships between the indicators and the latent variables in the model are described by the measurement equations:
y = Λ_y η + ε (1)
x = Λ_x ξ + δ (2)
where Λ_x is the factor-loading matrix reflecting the direct relationship between the exogenous observed variables and the exogenous latent variables; Λ_y is the factor-loading matrix reflecting the relationship between the endogenous observed variables and the endogenous latent variables; η is the endogenous latent variable; ξ is the exogenous latent variable; ε is the error term of the endogenous indicators y; and δ is the error term of the exogenous indicators x.
The relationships among the latent variables are described by the structural equation:
η = Bη + Γξ + ζ (3)
where B is the path-coefficient matrix among the endogenous latent variables; Γ is the path-coefficient matrix of the influence of the exogenous latent variables on the endogenous latent variables; and ζ is the residual term.
Compared with other comprehensive analysis methods, the structural equation model has a confirmatory function. On the basis of a certain theoretical analysis, the influence factors of a complex system are determined, a hypothesized theoretical model is built from the correlations among the factors, the required data are collected by means of questionnaire surveys, and the theoretical model is analyzed according to the degree of fit between the real data and the hypothesized model, so that the hypothesized theoretical model can be confirmed or falsified. At the same time, the relationships among the latent variables, the measured variables, and the error variables in the model can be tested, yielding the direct effects, indirect effects, and total effects of the causal variables on the outcome variables. A structural equation model requires the following conditions:
1) the error terms ε and δ in the measurement equations have zero mean;
2) the residual term ζ of the structural equation has zero mean;
3) the error terms ε and δ are uncorrelated with the factors η and ξ, and ε is uncorrelated with δ;
4) the residual term ζ is uncorrelated with the factor ξ and with the error terms ε and δ.
A complete structural equation model contains the following eight parameter matrices: Λ_x, Λ_y, B, Γ, Φ, Ψ, Θ_ε, Θ_δ. The first four appear in the measurement equations or the structural equation; Φ is the covariance matrix of the latent variables ξ, Ψ is the covariance matrix of the residual vector ζ, and Θ_ε and Θ_δ are the covariance matrices of ε and δ, respectively.
1.2 Solving the covariance matrix of a structural equation model
To solve the equations, the covariance matrix of the (p+q)-dimensional vector (y', x')' formed by all indicators must be obtained; it can be assembled from the covariance matrices of x and y and the covariance matrix between them. Taking covariances on both sides of equation (2) gives the covariance matrix of x:
Σ_xx(θ) = Λ_x ΦΛ_x' + Θ_δ (5)
Similarly, the covariance matrix of y is:
Σ_yy(θ) = Λ_y E(ηη')Λ_y' + Θ_ε (6)
Since
η = (I - B)^{-1}(Γξ + ζ) = A(Γξ + ζ) (7)
where A = (I - B)^{-1}, with the implicit assumption that I - B is an invertible matrix, it follows from (7) that:
E(ηη') = A(ΓΦΓ' + Ψ)A' (8)
Substituting (8) into (6) gives:
Σ_yy(θ) = Λ_y A(ΓΦΓ' + Ψ)A'Λ_y' + Θ_ε (9)
The covariance matrix of y and x is:
Σ_yx(θ) = Λ_y AΓΦΛ_x' (10)
Hence the covariance matrix of (y', x')' can be expressed as a function of the eight parameter matrices:
Σ(θ) = [[Σ_yy(θ), Σ_yx(θ)], [Σ_xy(θ), Σ_xx(θ)]] (11)
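As a numerical check on equations (5)-(11), the model-implied covariance Σ(θ) can be assembled directly from the eight parameter matrices. The sketch below uses made-up shapes and values (one exogenous and one endogenous latent variable with two indicators each); they are illustrative assumptions, not data from the patent.

```python
import numpy as np

def implied_covariance(Lx, Ly, B, G, Phi, Psi, Te, Td):
    """Build the model-implied covariance of (y', x')' from the
    eight parameter matrices, per equations (5)-(11)."""
    A = np.linalg.inv(np.eye(B.shape[0]) - B)     # A = (I - B)^{-1}, eq. (7)
    Eee = A @ (G @ Phi @ G.T + Psi) @ A.T         # E(eta eta'), eq. (8)
    Syy = Ly @ Eee @ Ly.T + Te                    # eq. (9)
    Sxx = Lx @ Phi @ Lx.T + Td                    # eq. (5)
    Syx = Ly @ A @ G @ Phi @ Lx.T                 # eq. (10)
    return np.block([[Syy, Syx], [Syx.T, Sxx]])   # eq. (11)

# tiny illustrative model (all numbers invented for the example)
Lx = np.array([[1.0], [0.8]]); Ly = np.array([[1.0], [0.9]])
B = np.zeros((1, 1)); G = np.array([[0.5]])
Phi = np.array([[1.0]]); Psi = np.array([[0.3]])
Te = 0.2 * np.eye(2); Td = 0.2 * np.eye(2)
Sigma = implied_covariance(Lx, Ly, B, G, Phi, Psi, Te, Td)
```

With positive-definite Φ, Ψ, Θ_ε, Θ_δ the resulting Σ(θ) is symmetric positive definite, which is exactly the condition the fit function of Section 2.1 requires.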
2 Improved parameter estimation method for structural equation models
In structural equation modeling, model estimation is the core task, and the estimation process is precisely the optimization of the fitting function. The fitting function of the structural equations is mainly obtained by maximum-likelihood estimation; it is an optimization objective in the eight parameter matrices, through Σ(θ) of equation (11). For this special objective, the traditional structural equation model has inherent limitations: it places many restrictions on the sample data, and when the sample data of a problem cannot satisfy these restrictions, the model often cannot be solved. The choice of initial values also affects the solution. The present method uses matrix differential calculus to differentiate the fitting function with respect to the eight parameter matrices, and optimizes the fitting function with an improved BFGS quasi-Newton method, preventing unreasonable initial values from producing singular matrices that make the model unsolvable.
2.1 Maximum-likelihood estimation of the structural equations
The estimation procedure of a structural equation model differs fundamentally from traditional statistical methods: rather than minimizing the differences between the fitted and observed values of each record in the sample, it minimizes the difference between the sample covariance matrix and the model-reproduced covariance matrix. The basis of parameter estimation is therefore to find the parameters that minimize the "gap" between the reproduced matrix and the sample covariance matrix. This "gap" is the fitting function, a function of the model parameters. Let the reproduced matrix be Σ(θ) and the sample covariance matrix be S; the fitting function is written F(S, Σ(θ)), and the optimal parameter estimate is the value that minimizes it. If the specified model is correct, Σ(θ) will closely approximate S.
There are many parameter estimation methods for model fitting, each with its own advantages and conditions of use. The most common is maximum-likelihood (ML) estimation, whose fitting function is:
F_ML = log|Σ(θ)| + tr(SΣ^{-1}(θ)) - log|S| - (p + q) (12)
In (12), tr(A) denotes the trace of matrix A, and log|A| the logarithm of the determinant of A. S and Σ(θ) are generally assumed to be positive definite, so that their determinants are greater than zero and Σ(θ) has an inverse. If the actual data do not satisfy this assumption, the solution procedure cannot proceed. F_ML is transformed from the logarithm of the maximum-likelihood function, so the indicator vector must be assumed to follow a multivariate normal distribution. The estimate θ̂ that minimizes F_ML is called the maximum-likelihood estimate, abbreviated ML estimate; it has the following properties:
1) The ML estimate is asymptotically unbiased: as the sample size increases, θ̂ converges to θ; that is, for large samples θ̂ is on average no different from θ.
2) The ML estimate is consistent: for large samples, the probability that θ̂ deviates substantially from θ is very small.
3) The ML estimate is asymptotically efficient: there is no other consistent estimator whose asymptotic variance is smaller.
4) The ML estimate is asymptotically normal: in large samples, its distribution approximates a normal distribution. This property is important in significance testing, since the ratio of an estimated parameter to its standard error approximately follows a t distribution, or even a standard normal distribution, in large samples.
5) With few exceptions, the ML estimate is scale invariant: it is not affected by the units of measurement, and changing the units does not change the model results. Scale invariance means that estimates obtained from the covariance matrix coincide with those obtained from the correlation matrix.
6) The ML estimate permits an overall test of the hypothesized model: the estimation yields a chi-square statistic, denoted χ², which is not only an important fit index itself but also the basis of most fit indices.
These advantages make maximum-likelihood estimation the most common estimation method for structural equations; it is the default estimation in LISREL. Although ML estimation assumes normally distributed indicators, many scholars have pointed out that it is robust in general settings: even when the normality condition is not met, conclusions based on ML estimation remain credible.
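The fit function (12) is straightforward to compute. The sketch below assumes positive-definite inputs, as the text requires; the sample matrix S is made up for illustration.

```python
import numpy as np

def f_ml(S, Sigma):
    """Maximum-likelihood fit function of eq. (12):
    F_ML = log|Sigma| + tr(S Sigma^{-1}) - log|S| - (p+q)."""
    n = S.shape[0]
    sign_s, logdet_s = np.linalg.slogdet(S)
    sign_m, logdet_m = np.linalg.slogdet(Sigma)
    assert sign_s > 0 and sign_m > 0, "S and Sigma must be positive definite"
    return logdet_m + np.trace(S @ np.linalg.inv(Sigma)) - logdet_s - n

S = np.array([[2.0, 0.5], [0.5, 1.0]])   # illustrative sample covariance
perfect = f_ml(S, S)          # vanishes when the model reproduces S exactly
misfit = f_ml(S, np.eye(2))   # positive for a mis-specified Sigma
```

F_ML ≥ 0, with equality exactly when Σ(θ) = S, which is why minimizing it drives the reproduced matrix toward the sample covariance matrix.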
2.2 Optimization of the maximum-likelihood parameter estimation
1) Differentiating the fitting function
The parameter estimation process is the optimization of the fitting function, which is a function of the free variables in the eight parameter matrices Λ_x, Λ_y, B, Γ, Φ, Ψ, Θ_ε, Θ_δ. With some parameters in the eight matrices held fixed, the minimization problem can be defined as follows.
Let λ' = (λ_1, ..., λ_k) be the vector of all free variables of the eight parameter matrices. Then F_ML can be regarded as a function F(λ) of λ' = (λ_1, ..., λ_k), and this function is continuously differentiable. The fitting function obtained by maximum-likelihood estimation is:
F_ML = log|Σ(θ)| + tr(SΣ^{-1}(θ)) - log|S| - (p + q) (13)
To apply the BFGS quasi-Newton method, F_ML must be differentiated with respect to the elements of the eight parameter matrices Λ_x, Λ_y, B, Γ, Φ, Ψ, Θ_ε, Θ_δ. To obtain the matrix derivatives, the following matrix-differential results are used.
Proposition 1. Define the matrix differential dX = (dX_ij). For a function F(X), if dF = tr(C dX'), then ∂F/∂X = C.
Proof: expanding the trace gives dF = Σ_ij C_ij dX_ij, so ∂F/∂X_ij = C_ij, and the proposition holds.
Proposition 2. By matrix differential calculus, d(log|X|) = tr(X^{-1} dX), where log|X| denotes the logarithm of the determinant of X.
Proof: since d|X| = tr(X* dX), where X* is the adjugate matrix of X and |X|^{-1} X* = X^{-1}, we have d(log|X|) = |X|^{-1} d|X| = tr(X^{-1} dX). Therefore the proposition d(log|X|) = tr(X^{-1} dX) holds.
Proposition 3. d(X^{-1}) = -X^{-1} dX X^{-1}.
Proof: since
d(XX^{-1}) = dX X^{-1} + X d(X^{-1}) = 0 (17)
it follows that d(X^{-1}) = -X^{-1} dX X^{-1}.
Proposition 4. From Proposition 3:
d tr(AX^{-1}) = tr(A d(X^{-1})) = -tr(AX^{-1} dX X^{-1}) = -tr(X^{-1} A X^{-1} dX) (18)
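Propositions 2-4 can be verified numerically by comparing each differential against a small finite perturbation. The test matrices below are arbitrary (an illustrative assumption of the sketch).

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
X = M @ M.T + np.eye(4)                   # symmetric positive definite test matrix
A = rng.standard_normal((4, 4))
dX = 1e-6 * rng.standard_normal((4, 4))   # small perturbation
inv = np.linalg.inv

# Proposition 2: d(log|X|) = tr(X^{-1} dX)
lhs2 = np.linalg.slogdet(X + dX)[1] - np.linalg.slogdet(X)[1]
rhs2 = np.trace(inv(X) @ dX)

# Proposition 3: d(X^{-1}) = -X^{-1} dX X^{-1}
lhs3 = inv(X + dX) - inv(X)
rhs3 = -inv(X) @ dX @ inv(X)

# Proposition 4: d tr(A X^{-1}) = -tr(X^{-1} A X^{-1} dX)
lhs4 = np.trace(A @ inv(X + dX)) - np.trace(A @ inv(X))
rhs4 = -np.trace(inv(X) @ A @ inv(X) @ dX)
```

Each left-hand side agrees with its differential up to second-order terms in dX, which are of order 1e-12 here.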
Using the propositions above, write Σ = Σ(θ) and let D = AΓ. Then, by Proposition 3:
dA = A dB A (19)
dD = dA Γ + A dΓ = A dB D + A dΓ (20)
Differentiating the fitting function F_ML gives:
dF_ML = tr(Ω dΣ) (21)
where Ω = Σ^{-1}(Σ - S)Σ^{-1}.
From the definitions of A and D and equation (11):
Σ_yy = Λ_y DΦD'Λ_y' + Λ_y AΨA'Λ_y' + Θ_ε (22)
Σ_xy = Λ_x ΦD'Λ_y' (23)
Σ_yx = Σ_xy' (24)
Σ_xx = Λ_x ΦΛ_x' + Θ_δ (25)
Applying the product rule with (19) and (20) to (22)-(25) gives:
dΣ_yy = dΛ_y (DΦD' + AΨA')Λ_y' + Λ_y (dD ΦD' + D dΦ D' + DΦ dD' + dA ΨA' + A dΨ A' + AΨ dA')Λ_y' + Λ_y (DΦD' + AΨA') dΛ_y' + dΘ_ε (26)
dΣ_xy = dΛ_x ΦD'Λ_y' + Λ_x dΦ D'Λ_y' + Λ_x Φ dD'Λ_y' + Λ_x ΦD' dΛ_y' (27)
dΣ_xx = dΛ_x ΦΛ_x' + Λ_x dΦ Λ_x' + Λ_x Φ dΛ_x' + dΘ_δ (28)
dΣ_yx = dΣ_xy' (29)
Substituting these differentials into (21), and noting that tr(C'dX) = tr(dX'C) = tr(C dX'), yields the partial derivatives of the fitting function with respect to each of the eight parameter matrices (formulas (30)-(37)). Once these derivatives are obtained, the fitting function can be optimized directly with the improved BFGS algorithm.
2) The limited-memory BFGS algorithm
When the BFGS quasi-Newton method is used, the Hessian matrix may have a very large dimension or be dense, which hampers computation. The limited-memory BFGS (L-BFGS) algorithm does not store the entire dense n × n matrix; it keeps only partial information, updates the Hessian approximation, and builds it from the information obtained in the most recent iterations.
From the BFGS quasi-Newton method, each BFGS iteration has the form:
x_{k+1} = x_k - a_k H_k ∇F(x_k) (38)
where a_k is the step length and H_k is updated at each iteration by:
H_{k+1} = V_k' H_k V_k + ρ_k s_k s_k' (39)
where
ρ_k = 1/(y_k' s_k), V_k = I - ρ_k y_k s_k' (40)
and
s_k = x_{k+1} - x_k, y_k = ∇F(x_{k+1}) - ∇F(x_k) (41)
Thus H_{k+1} is obtained by updating H_k with the vector pair {s_k, y_k}. The inverse Hessian H_k is typically dense, and when the variables are numerous its storage and computation are difficult. To overcome this problem, a fixed number m of vector pairs {s_i, y_i} is stored, and H_k is updated through them. When a new iteration is performed, the oldest pair is replaced by the newest, so the number m stays fixed and the stored pairs always come from the m most recent iterations. In general, taking m between 3 and 20 produces satisfactory results.
The update of H_k is described as follows. Expanding (39) over the pairs {s_i, y_i}, i = k-m, ..., k-1, for m steps gives the expansion:
H_k = (V_{k-1}' ⋯ V_{k-m}') H_k^0 (V_{k-m} ⋯ V_{k-1}) + ρ_{k-m} (V_{k-1}' ⋯ V_{k-m+1}') s_{k-m} s_{k-m}' (V_{k-m+1} ⋯ V_{k-1}) + ⋯ + ρ_{k-1} s_{k-1} s_{k-1}' (42)
Unlike the standard BFGS algorithm, the initial matrix H_k^0 here is not fixed but changes from iteration to iteration. For the choice of H_k^0, take
H_k^0 = γ_k I, where γ_k = s_{k-1}' y_{k-1} / (y_{k-1}' y_{k-1}) (43)
From the expansion above, the product H_k ∇F(x_k) can be computed by the following algorithm.
Algorithm 1: L-BFGS update (two-loop recursion)
Step 1: set q = ∇F(x_k).
Step 2: iterate:
for i = k-1, k-2, ..., k-m
α_i = ρ_i s_i' q;
q = q - α_i y_i;
end (for)
Step 3: set r = H_k^0 q and iterate:
for i = k-m, k-m+1, ..., k-1
β = ρ_i y_i' r;
r = r + s_i (α_i - β);
end (for)
Step 4: terminate; the result is H_k ∇F(x_k) = r.
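Algorithm 1 can be sketched directly. The single-pair sanity check at the end compares the recursion against the explicit update (39) applied to H_k^0 = γI; the vector values are made up for illustration.

```python
import numpy as np

def two_loop(grad, s_list, y_list):
    """Algorithm 1: L-BFGS two-loop recursion computing r = H_k * grad
    from the m most recent pairs {s_i, y_i} (oldest first)."""
    q = grad.copy()
    rho = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alpha = [0.0] * len(s_list)
    for i in reversed(range(len(s_list))):        # Step 2: newest to oldest
        alpha[i] = rho[i] * (s_list[i] @ q)
        q = q - alpha[i] * y_list[i]
    s, y = s_list[-1], y_list[-1]
    gamma = (s @ y) / (y @ y)                     # H_k^0 = gamma*I, eq. (43)
    r = gamma * q                                 # Step 3
    for i in range(len(s_list)):                  # oldest to newest
        beta = rho[i] * (y_list[i] @ r)
        r = r + s_list[i] * (alpha[i] - beta)
    return r

# illustrative single-pair example
s = np.array([1.0, 0.5]); y = np.array([0.4, 0.2]); g = np.array([0.3, -0.7])
r = two_loop(g, [s], [y])
```

With m stored pairs the recursion costs O(mn) per iteration, instead of the O(n²) needed to store and apply a dense H_k.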
Algorithm 2: L-BFGS algorithm
Step 1: choose the initial value x_0 and the memory size m; set k = 0.
Step 2: set the convergence condition and iterate:
select H_k^0 according to (43);
compute x_{k+1} by formula (38), where the step length a_k satisfies the Wolfe search conditions;
if k > m, delete {s_{k-m}, y_{k-m}} from the stored vectors;
compute and store s_k = x_{k+1} - x_k and y_k = ∇F(x_{k+1}) - ∇F(x_k); set k = k + 1.
Step 3: when the convergence condition is met, terminate.
3) Modification of the Hessian matrix
The main difficulty faced by Newton's method is that the Hessian matrix B_k may not be positive definite; the quadratic model then need not have a minimizer, and may not even have a stationary point. When B_k is indefinite, the quadratic model function is unbounded below.
Goldfeld et al. proposed a modification of the Hessian matrix B_k: replace B_k by B_k + v_k I, where v_k > 0 is chosen so that B_k + v_k I is positive definite. Ideally, v_k is not much larger than the smallest v that makes B_k + v_k I positive definite. The framework of this method is as follows:
Algorithm 3: Modified Newton method
Choose an initial value x_0 ∈ R^n; the k-th iteration is:
Step 1: form B_k + v_k I; if B_k is positive definite, set v_k = 0; otherwise take v_k > 0, computed by (45).
Step 2: compute the Cholesky factorization of B_k + v_k I.
Step 3: solve (B_k + v_k I) d_k = -∇F(x_k) for the search direction d_k.
Step 4: set x_{k+1} = x_k + a_k d_k, where the step length a_k satisfies the Wolfe search conditions.
In the algorithm above, v_k should be slightly larger in magnitude than the most negative eigenvalue of B_k. Here the Cholesky factorization algorithm as modified by Gill and Murray is used to determine v_k: applying the modified Cholesky factorization to B_k gives B_k + E = L_k D_k L_k'. If E = 0, set v_k = 0; otherwise the Gerschgorin circle theorem gives an upper bound b_1 on the magnitude of the most negative eigenvalue λ_i of B_k:
b_1 = max_i (R_i - b_ii), where R_i = Σ_{j≠i} |b_ij| (44)
In addition, let b_2 = max_i e_ii, where e_ii is the i-th diagonal element of E; b_2 is also an upper bound for v_k. Set
v_k = min{b_1, b_2} (45)
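The shift B_k → B_k + v_k I can be sketched with only the Gerschgorin bound b_1; the Gill-Murray factorization and the second bound b_2 are omitted, so this is a simplified stand-in for the procedure above, not the patented Algorithm 4 (an assumption of the sketch).

```python
import numpy as np

def shift_to_pd(B):
    """Replace an indefinite B_k by B_k + v_k I, with v_k obtained from
    the Gerschgorin circle theorem (simplified stand-in for Gill-Murray)."""
    try:                                    # v_k = 0 when B_k is already PD
        np.linalg.cholesky(B)
        return B, 0.0
    except np.linalg.LinAlgError:
        pass
    # Gerschgorin: every eigenvalue lies in [b_ii - R_i, b_ii + R_i]
    R = np.sum(np.abs(B), axis=1) - np.abs(np.diag(B))
    lam_min_bound = np.min(np.diag(B) - R)  # lower bound on min eigenvalue
    v = -lam_min_bound + 1e-8               # shift just past the bound
    return B + v * np.eye(B.shape[0]), v

B = np.array([[1.0, 3.0], [3.0, 1.0]])      # indefinite: eigenvalues 4 and -2
B_mod, v = shift_to_pd(B)
```

The attempted Cholesky factorization doubles as the positive-definiteness test of Step 1 in Algorithm 3: it succeeds exactly when B_k is positive definite.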
The modified Cholesky factorization algorithm is given below.
Algorithm 4: Modified Cholesky factorization
Step 1: compute the bound on the factor elements. Set β² = max{γ, ξ/ν, ε}, where ν = max{1, √(n² - 1)}, γ and ξ are the largest magnitudes of the diagonal and off-diagonal elements of B_k, respectively, and ε is the computational precision.
Step 2: initialize. Set j = 1 and c_ii = b_ii, i = 1, ..., n, where the b_ij are the elements of B_k.
Step 3: find the smallest index q such that |c_qq| = max_{j≤i≤n} |c_ii|, and exchange the information of rows and columns q and j of B_k.
Step 4: compute the j-th row of L and the largest magnitude among the l_ij d_jj:
set l_js = c_js / d_ss, s = 1, ..., j-1;
compute c_ij = b_ij - Σ_{s=1}^{j-1} l_js c_is, i = j+1, ..., n;
set θ_j = max_{j<i≤n} |c_ij|; if j = n, set θ_j = 0.
Step 5: compute the j-th diagonal element of D:
d_jj = max{δ, |c_jj|, θ_j²/β²};
set the diagonal element of E to e_jj = d_jj - c_jj; if j = n, stop.
Step 6: correct the diagonal elements and advance the column index:
set c_ii = c_ii - c_ij²/d_jj, i = j+1, ..., n; set j = j + 1 and go to Step 3.
3 Implementation of the improved structural-equation parameter estimation method
3.1 Improved parameter estimation algorithm
Optimization uses the improved BFGS algorithm: when the Hessian matrix is not positive definite, it is corrected by Algorithm 3, and the step length is obtained by Wolfe-Powell inexact line search. The initial values of the algorithm are obtained by estimating the parameters with partial least-squares regression. Such initial values lie closer to the optimal solution and reduce the number of iterations, while preventing unreasonable initial values from producing singular matrices that make the model unsolvable. The algorithm combines the advantages of LISREL and PLS, estimates the model parameters more effectively, and thus allows the properties of the structural equations to be studied better. The steps of the improved parameter estimation algorithm are as follows:
Step 1: compute the initial value x_0 by PLS; set the initial inverse Hessian H_0 = I, 0 ≤ ε ≤ 1, and k := 0.
Step 2: compute the gradient g_k = ∇F_ML(x_k) using the derivative formulas (30)-(37). If ||g_k|| ≤ ε, stop; otherwise compute d_k = -H_k g_k.
Step 3: along the direction d_k, obtain the step length a_k by Wolfe-Powell inexact line search, and set x_{k+1} = x_k + a_k d_k.
Step 4: update H_k by the BFGS formula (39) to obtain H_{k+1}, where s_k = x_{k+1} - x_k and y_k = ∇F_ML(x_{k+1}) - ∇F_ML(x_k).
Step 5: test whether the Hessian matrix B_k is positive definite; if B_k is not positive definite, modify it with Algorithm 3, and store the limited-memory pairs {s_k, y_k} with Algorithm 2. Return to Step 2.
3.2 Example and discussion
To verify the validity of the algorithm of the invention, the key factors influencing coalbed-methane development geology were analyzed as an example, and the algorithm of the invention was compared with the traditional structural-equation parameter estimation algorithm. Samples of 100, 200, 300, 400, 500, and 600 data points were used to test the two algorithms; the iteration results are shown in Fig. 1.
The analysis shows that when the sample size is below 200, the traditional algorithm cannot solve the model, whereas the algorithm of the invention, which preprocesses the initial values by partial least-squares regression, still obtains a solution. Further analysis shows that as the sample size increases, the gap between the two algorithms becomes increasingly apparent: the iteration count of the proposed algorithm is significantly lower than that of the traditional algorithm. Thus the proposed algorithm not only prevents unreasonable initial values from producing singular matrices that make the model unsolvable, but also effectively improves the efficiency of model parameter estimation.
Because partial least-squares regression is introduced to preprocess the initial values, the availability of initial values for structural-equation parameter estimation is improved and failures of the model computation caused by singular matrices are avoided; the improved BFGS optimization raises the computational efficiency of the algorithm. The example calculations show that both the performance and the iteration speed of the algorithm are greatly improved.
The foregoing is only a preferred embodiment of the present invention, and the scope of protection of the present invention is not limited thereto. Any simple variation or equivalent replacement of the technical solution that a person skilled in the art could readily conceive within the technical scope of the present disclosure falls within the scope of protection of the present invention.
Claims (1)
1. An improved parameter estimation algorithm for structural equation models, characterized by comprising the following steps:
Step 1: compute the initial value x_0 by PLS; set the initial inverse Hessian H_0 = I, 0 ≤ ε ≤ 1, and k := 0.
Step 2: compute the gradient g_k = ∇F_ML(x_k) of the fitting function F_ML. If ||g_k|| ≤ ε, stop; otherwise compute d_k = -H_k g_k. Here F_ML is the fitting function obtained by maximum-likelihood estimation; ∇F_ML is the gradient of F_ML; H_k is the inverse Hessian; x is the exogenous measured variable; y is the endogenous measured variable; ξ is the exogenous latent variable; η is the endogenous latent variable; δ is the measurement error of the exogenous measured variable x; ε is the measurement error of the endogenous measured variable y; Λ_x describes the relationship between x and ξ, i.e. the factor loadings of the exogenous measured variables on the exogenous latent variables; Λ_y describes the relationship between y and η, i.e. the factor loadings of the endogenous measured variables on the endogenous latent variables; B is the path-coefficient matrix among the endogenous latent variables; Γ is the path-coefficient matrix of the influence of the exogenous latent variables on the endogenous latent variables; Φ is the covariance matrix of the latent variables ξ; Ψ is the covariance matrix of the residual vector ζ; Θ_ε is the covariance matrix of ε; Θ_δ is the covariance matrix of δ.
Step 3: along the direction d_k, obtain the step length α_k by Wolfe-Powell inexact line search, and set x_{k+1} = x_k + α_k d_k.
Step 4: update H_k by the BFGS formula to obtain H_{k+1} = V_k' H_k V_k + ρ_k s_k s_k', where ρ_k = 1/(y_k' s_k), V_k = I - ρ_k y_k s_k', s_k = x_{k+1} - x_k, and y_k = ∇F_ML(x_{k+1}) - ∇F_ML(x_k).
Step 5: test whether the Hessian matrix B_k is positive definite; if B_k is not positive definite, modify it with the modified Newton method, and store the limited-memory pairs {s_k, y_k} with the L-BFGS algorithm. Return to Step 2.
Priority Application (1)
- CN201810115977.6A — priority/filing date 2018-01-30 — An improved parameter estimation algorithm for structural equation models

Publication (1)
- CN108197414A — published 2018-06-22

Family ID: 62592523
Cited By (2)
- CN110207696A — priority 2019-06-06, published 2019-09-06, 南京理工大学 — A nine-axis motion-sensor attitude measurement method based on the quasi-Newton method
- CN112559848A — priority 2020-12-14, published 2021-03-26, 华南理工大学 — A manifold search method for optimal weighted directed graphs

Patent Citations (2)
- CN106096834A — priority 2016-06-02, published 2016-11-09, 淮南师范学院 — A coal-mine safety management risk evaluation method based on SEM-FSVM
- CN107064896A — priority 2017-03-30, published 2017-08-18, 南京信息工程大学 — A MIMO radar parameter estimation method based on a truncated modified SL0 algorithm
2018-01-30: application CN201810115977.6A filed in China; legal status: Pending.
Legal Events
- PB01 — Publication (application publication date: 2018-06-22)
- SE01 — Entry into force of request for substantive examination
- RJ01 — Rejection of invention patent application after publication