JP2021068456A5 - Google Patents


Info

Publication number
JP2021068456A5
JP2021068456A5 (application JP2020183948A)
Authority
JP
Japan
Prior art keywords
variables
variable
analysis
sales
correlation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2020183948A
Other languages
Japanese (ja)
Other versions
JP2021068456A (en)
Filing date
Publication date
Application filed
Priority to JP2020183948A priority Critical patent/JP2021068456A/en
Priority claimed from JP2020183948A external-priority patent/JP2021068456A/en
Publication of JP2021068456A publication Critical patent/JP2021068456A/en
Publication of JP2021068456A5 publication Critical patent/JP2021068456A5/ja
Legal status: Pending


Description

A computational technique that eliminates "multicollinearity" and related problems in regression analysis and obtains partial regression coefficients that properly represent the contribution of each explanatory variable to the objective variable, for use as management data.

The present invention is a computational technique whose purpose is to resolve problems such as "multicollinearity" that arise when multiple regression analysis is applied to the relationship between an industrial result, such as sales, and its causes, such as selling expenses, and to obtain appropriate partial regression coefficients bi that represent the contribution of each causal variable, such as selling expenses, to sales.

When multiple regression analysis is applied to clarify the relationship between a result and its causes, problems such as "multicollinearity" arise among the explanatory variables, and sound management decisions cannot be based on the output. The hypothetical example below illustrates the distortion of the partial regression coefficients bi caused by "multicollinearity", and also the distortion of bi that occurs when the explanatory variables are only moderately correlated with one another.

Multiple regression equation: Y = b0 + b1X1 + b2X2 (1)
Multiple regression equation for standardized variables: Ay = b1A1 + b2A2 (2)
Hypothetical example
Suppose sales and their causes, personnel costs and sales-floor area, are as follows.

Store  Sales (million yen) Y  Personnel costs (100,000 yen) X1  Sales-floor area (10 m²) X2
A      46                     97                                75
B      18                     61                                59
C      36                     83                                72
D      22                     37                                51
E      27                     75                                48

Multiple regression analysis with two explanatory variables is applied to these data. The results of this calculation are as follows:
Correlation coefficient of sales with personnel costs R1: 0.84
Correlation coefficient of sales with sales-floor area R2: 0.79
Correlation coefficient of personnel costs with sales-floor area r12: 0.72
b1 = 0.56, b2 = 0.38
Although the correlation coefficients of personnel costs and sales-floor area with sales differ little, the partial regression coefficients b1 and b2, which represent the contribution of each cause to the result, differ greatly. This is because personnel costs and sales-floor area are themselves fairly highly correlated, and managers with practical experience will doubt the validity of such a result.
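As a numerical check (a sketch, not part of the specification), the correlations and standardized coefficients of this example can be reproduced from the five-store table with a few lines of NumPy:

```python
import numpy as np

# Store data from the hypothetical example: sales Y (million yen),
# personnel costs X1 (100,000 yen), sales-floor area X2 (10 m²).
Y  = np.array([46.0, 18.0, 36.0, 22.0, 27.0])
X1 = np.array([97.0, 61.0, 83.0, 37.0, 75.0])
X2 = np.array([75.0, 59.0, 72.0, 51.0, 48.0])

def corr(a, b):
    """Pearson correlation coefficient."""
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

R1, R2, r12 = corr(Y, X1), corr(Y, X2), corr(X1, X2)

# Standardized partial regression coefficients: solve the normal
# equations in correlation form, [[1, r12], [r12, 1]] @ [b1, b2] = [R1, R2].
R_matrix = np.array([[1.0, r12], [r12, 1.0]])
b1, b2 = np.linalg.solve(R_matrix, np.array([R1, R2]))

print(round(R1, 2), round(R2, 2), round(r12, 2))  # 0.84 0.79 0.72
# b1 ≈ 0.565 and b2 ≈ 0.381; the text reports 0.56 and 0.38.
print(b1, b2)
```

Despite R1 and R2 differing by only 0.05, b1 comes out roughly 1.5 times b2, which is the distortion the text describes.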

Analysis example 2 (only the analysis results are shown; taken from embodiment 3)
Case name:                                             J     K     L
Correlation coefficient with the objective variable R: 0.51  0.62  0.54
Partial regression coefficient bi:                     0.10  0.43  0.33
This is an analysis of an arbitrarily constructed example, yet for case J the partial regression coefficient is small relative to its correlation coefficient. This is because the correlation coefficients among the explanatory variables of the embodiment are somewhat large, around 0.5; but correlations of that order cannot be called especially high.

Regression analysis examines how well a set of explanatory variables (X1, X2, ..., Xp) explains the objective variable Y. Originally the causal variables Xi were highly independent quantities drawn from physics, chemistry, and the like, and the analysis was not meant to apportion the contribution of each individual Xi within the set. Later, when multiple regression analysis came to be used to analyze the relationship between corporate sales, a result of socio-economic phenomena, and the selling expenses that cause them, the causes adopted, such as personnel costs, selling expenses, and sales-floor area, were chosen precisely because they have a strong causal relationship with sales. Mediated by the sales figures, the correlations among these causes are therefore also high, and the coefficients bi that should represent each cause's contribution to sales take abnormal values contrary to common sense and theory. This is the problem of "multicollinearity".

Standardized variables
In equation (1), if the measurement unit of an explanatory variable Xi is, say, a head count or an area in m², the meaning of the resulting bi is ambiguous. Since the variables must in any case be standardized to build the correlation matrices used heavily in the following analysis, all variables are used in standardized form.
Below, A1 and A2 are the standardized versions of X1 and X2.
Standardization: Ai = (Xi − mean(Xi)) / σi (3)
where mean(Xi) is the average of Xi and σi its standard deviation. By standardizing Y, X1, and X2, each Ai becomes a set of values with mean 0 and standard deviation 1.
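As a quick sketch (not part of the specification), standardization (3) can be verified to produce mean 0 and standard deviation 1, applied here to the X1 column of the earlier store example:

```python
import numpy as np

# Standardization of formula (3): Ai = (Xi - mean(Xi)) / sigma_i,
# using the population standard deviation.
X1 = np.array([97.0, 61.0, 83.0, 37.0, 75.0])
A1 = (X1 - X1.mean()) / X1.std()

# The standardized values form a group with mean 0 and std 1.
print(abs(A1.mean()) < 1e-12, abs(A1.std() - 1.0) < 1e-12)  # True True
```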

Correlation matrix of the normal equations
The analysis below makes heavy use of the correlation-matrix form of the normal equations of multiple regression, which is introduced here. For two standardized explanatory variables, the correlation matrix and the normal equations can be written

  | 1    r12 | | b1 |   | R1 |
  | r12  1   | | b2 | = | R2 |   (4)

where r12 is the correlation coefficient between the explanatory variables and Ri is the correlation coefficient of explanatory variable i with the objective variable Y.

Patent No. 5120820, "Apparatus for properly appraising standard residential land by regression analysis that eliminates 'multicollinearity'"
Resolving "multicollinearity" comes down to how reasonably the imbalance between the quantities on the left- and right-hand sides of the matrix form of the normal equations can be suppressed. In this patent, for the two-explanatory-variable case, the relatively high correlation between the explanatory variables mediated by the objective variable (land price) is removed as follows, giving the two off-diagonal elements of the first row of the matrix:
r12' = r12(1 − R1R2)
This removes the quantity R1R2 in one sweep over a whole series of cases, not case by case. The removal can be expected to mitigate "multicollinearity" to some extent, but it is a rough calculation.
Patent No. 5585921, "Apparatus that applies multiple regression analysis with 'multicollinearity' elimination to the analysis of the relationship between an industrial result such as sales and its causes such as selling expenses"
The "multicollinearity" elimination of this patent removes overlap case by case and is thus more careful than the removal of [Patent Document 1], but because the removal acts on the standardized explanatory variables themselves, the cases must be divided into those where the signs of the standardized variables agree and those where they disagree, and the removal is carried out separately for each group. The procedure is inconsistent, and the resulting correlation matrix is asymmetric.

The following methods are generally considered for avoiding the "multicollinearity" problem:
deletion of one of a pair of highly correlated explanatory variables;
principal component analysis;
ridge regression.
Deleting an explanatory variable is inappropriate: it discards causes such as the number of employees or selling expenses that were deliberately selected as significant for the resulting sales. Principal component analysis summarizes p variables X1, X2, ..., Xp into m composite variables (m < p) under certain conditions; it does not yield the coefficients bi of the explanatory variables.
Ridge regression obtains the partial regression coefficients after adding a constant k to the diagonal elements of the correlation matrix; in outline, it solves

  (R + kI) b = r

where R is the correlation matrix of the explanatory variables and r the vector of their correlations with the objective variable. Relative to the enlarged diagonal, this reduces the weight of the off-diagonal correlation r12, and to that extent "multicollinearity" is relieved; but there is no firm criterion for choosing the value added to the diagonal, so the method is arbitrary and, it seems, not much used.
Arthur E. Hoerl and Robert W. Kennard, "Ridge Regression: Biased Estimation for Nonorthogonal Problems", February 1970
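A short sketch of the ridge idea (not from the patent; k = 0.1 is an arbitrary illustrative choice), using the correlations of the earlier store example:

```python
import numpy as np

# Ridge regression on the correlation matrix: add a constant k to the
# diagonal, then solve (R + k*I) b = r.  Correlation values are taken
# from the store example (r12 = 0.72, R1 = 0.84, R2 = 0.79).
r12, R1, R2 = 0.72, 0.84, 0.79
R = np.array([[1.0, r12], [r12, 1.0]])
r = np.array([R1, R2])

b_ols = np.linalg.solve(R, r)                    # ordinary solution
k = 0.1                                          # arbitrary ridge constant
b_ridge = np.linalg.solve(R + k * np.eye(2), r)  # ridge solution

# The ridge constant shrinks b1 and narrows the gap between b1 and b2,
# but the choice of k itself has no firm criterion.
print(b_ols.round(3), b_ridge.round(3))
```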

Problems to be solved by the invention

Whereas ridge regression relieves "multicollinearity" indirectly, operating on the diagonal elements so as to reduce the relative weight of the off-diagonal values, the present invention takes the off-diagonal elements of the correlation matrix as its direct object of analysis.

Significance of analyzing the result-cause relationship by multiple regression
Analyzing, by multiple regression, the contributions of the explanatory variables that cause a socio-economic result such as sales, and using the analysis as a management reference, is a natural idea for a manager seeking to rationalize operations. The multiple regression analysis treated in multivariate analysis, however, suffers from problems such as "multicollinearity" and is not fully utilized.
Looking at the values on the two sides of the two-variable normal equations in correlation form (equation (4)): apart from the coefficient b1, the left-hand side of the first row contains two correlation coefficients involving the explanatory variables Xi and Xj, while the right-hand side contains the single correlation coefficient R1. Since each correlation coefficient lies between 0 and 1 (inclusive of 1), the left-hand side carries roughly "twice" the explanatory-variable weight of the right-hand side.
Moreover, the right-hand side of this matrix equation consists of the correlation coefficients between the objective variable Y and the diagonal-element variables of the explanatory-variable matrix on the left, with no other variables, and the correlation of each left-hand diagonal-element variable with itself is 1. Accordingly, the diagonal element of each row is taken as that row's "principal variable" and the off-diagonal elements as subordinate variables.
For the relative proportions of the bi obtained by regression to come close to the relative proportions of the original correlation coefficients Ri, it is desirable that the matrix of rows and columns corresponding to the explanatory variables be close to the identity matrix.
Since the left-hand side can be expected to contain duplicated quantities, the diagonal element is taken as the reference variable and the portion corresponding to it is removed from the off-diagonal explanatory variables.
The residue after removal is important: it expresses the individual character of the off-diagonal elements, and it also indicates whether the explanatory variables, and their number, were chosen appropriately. The analysis raises many derivative questions, but it has the important significance of supplying suitable material for the mathematical analysis of the modern socio-economy, which may fairly be called overloaded with information.

Means for solving the problems

1. Even if a constant is added to, or multiplied into, the standardized variable Ai, the original Ai values are unchanged (see standardization formula (3)).
2. ΣA1A2/n = r12.
3. Removing the overlap directly on the Ai values is complicated when negative values are present, so a constant is added so that all values become positive in the removal calculation and all Ai values are put on a common footing (in this embodiment 1.8, chosen with the largest negative value of A in mind).
4. For the left-hand side (diagonal element A1, off-diagonal element A2), the portion of A2 that moves together with A1 is removed from A2, up to the smaller of the absolute values of A1 and A2, and the residue is taken:

Removal example  A1   A2   Removed  Residue
                 0.3  0.2  0.2      0
                 0.2  0.5  0.2      0.3

With the constant addition of step 3, the value totals of A1 and A2 in the embodiment both become equal to 18. From this, as the following simple example shows, the off-diagonal elements of the normal equations that sit in symmetric positions about the diagonal of the square matrix take equal values.
Let the value totals of A1 and A2 both equal 0.7.
When A1 is the diagonal element (removal reference) and A2 is the off-diagonal element being reduced:

A1   A2   Removed  Residue
0.3  0.5  0.3      0.2
0.4  0.2  0.2      0

In the opposite case:

A2   A1   Removed  Residue
0.5  0.3  0.3      0
0.2  0.4  0.2      0.2

The total residue is 0.2 in both directions; hence the off-diagonal elements in symmetric positions about the diagonal of the correlation matrix of the normal equations are equal.
5. Since the removal residue is part of A2, it is multiplied into A1·A2 (which is the correlation coefficient r12 before overlap removal; see equation (4)) to obtain anew the r12 after overlap removal; a new correlation matrix is formed with it, and the bi (i = 1, 2) after overlap removal are obtained.
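The removal procedure can be sketched in code. Because the text leaves some details open, the following is one possible reading, not the patented computation: the per-observation removal min(a1, a2), the residual fraction, and the rescaling of r12 by that fraction are all interpretations.

```python
import numpy as np

# Interpretive sketch of steps 3-5: standardize, shift positive,
# remove the co-moving portion observation by observation, and rescale
# r12 by the residual share of A2.  All assumptions, not the patent's
# exact procedure.
def overlap_removed_r12(x1, x2, shift=1.8):
    a1 = (x1 - x1.mean()) / x1.std() + shift   # formula (3) plus shift
    a2 = (x2 - x2.mean()) / x2.std() + shift
    r12 = np.corrcoef(x1, x2)[0, 1]            # r12 before removal
    residue = np.maximum(a2 - np.minimum(a1, a2), 0.0)
    fraction = residue.sum() / a2.sum()        # residual share of A2
    return r12 * fraction                      # assumed "new r12"

X1 = np.array([97.0, 61.0, 83.0, 37.0, 75.0])
X2 = np.array([75.0, 59.0, 72.0, 51.0, 48.0])
new_r12 = overlap_removed_r12(X1, X2)
print(0.0 <= new_r12 < np.corrcoef(X1, X2)[0, 1])  # True: r12 shrinks
```

Whatever the exact reading, the intended effect is the same as shown here: the off-diagonal correlation shrinks, moving the correlation matrix toward the identity matrix that the preceding section calls desirable.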

Effects of the invention

To check the validity of the bi obtained, their relative proportions are computed with the highest original correlation coefficient of an explanatory variable with the objective variable Y set to 100. They come close to the original correlation values, showing that "multicollinearity" and related problems have been resolved.

Implementation procedure

1. The objective variable Y and explanatory variables Xi of the normal equations of multiple regression are standardized, both to suppress the variation due to the measurement units of the explanatory variables and to construct the correlation matrix of the normal equations.
2. The property of the standardized variables that adding or multiplying a constant does not alter their values is exploited.
3. A constant is added to the standardized variables, for the sake of the values in each row of the normal equations and so that the overlap removal produces no negative values.
4. The removal of the overlap of the subordinate variables of the off-diagonal elements, with the diagonal principal variable as the reference, applies the calculation of [0012].
5. Since the residue obtained is part of the subordinate variable that forms the off-diagonal element of the matrix [0012], a new determinant is formed with it as the correlation coefficient of the off-diagonal elements of the normal equations.
6. From the new matrix of the normal equations produced by the above operations, the bi representing the contribution to the objective variable Y are obtained.
7. To verify that the computational method of eliminating "multicollinearity" and related problems described so far is appropriate, its validity is checked by adding explanatory variables one at a time to the same example. The results of applying the elimination method with four explanatory variables come close to the relative proportions of the correlation coefficients of the explanatory variables with the original objective variable, and are broadly valid.

The objective variable Y and explanatory variables Xh, Xi, Xj, Xk, Xl below are arbitrarily set values. To simplify the calculation, the correlation coefficients among all the variables are computed.

Figure 2021068456

Cases of 2, 3, and 4 variables
Case of 2 variables
Figure 2021068456
Case of 3 variables
Figure 2021068456
Case of 4 variables
Figure 2021068456
Figure 2021068456
In every case, with the 2 adopted variables and with the added third and fourth variables, the bi values obtained by the current analysis method are distorted, whereas the overlap-removal analysis gives results comparatively close to the relative proportions of the R values.

A calculation example of regression analysis with the three variables Xj, Xk, Xl, a case in which the correlation coefficients with the objective variable Y are all close to 0.5 (Xj and Xk as above).

Figure 2021068456
Although the b values of the current analysis method are distorted, the overlap-removal calculation comes close to the relative proportions of the R values.

JP2020183948A 2020-10-15 2020-10-15 Calculation technique for eliminating "multicollinearity" or the like in regression analysis, and obtaining partial regression coefficient indicating contribution to proper objective variable of explanatory variable, as management material Pending JP2021068456A (en)


Publications (2)

Publication Number Publication Date
JP2021068456A JP2021068456A (en) 2021-04-30
JP2021068456A5 true JP2021068456A5 (en) 2021-07-29

Family

ID=75637388


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115545216B (en) * 2022-10-19 2023-06-30 上海零数众合信息科技有限公司 Service index prediction method, device, equipment and storage medium
