CN110928859A - Model monitoring method and device, computer equipment and storage medium - Google Patents

Model monitoring method and device, computer equipment and storage medium

Info

Publication number
CN110928859A
CN110928859A (application CN201911172695.0A)
Authority
CN
China
Prior art keywords
model
index
monitoring
sample
monitored
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911172695.0A
Other languages
Chinese (zh)
Inventor
陈传栋
刘冰婉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taiping Financial Science And Technology Service (shanghai) Co Ltd
Original Assignee
Taiping Financial Science And Technology Service (shanghai) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taiping Financial Science And Technology Service (shanghai) Co Ltd filed Critical Taiping Financial Science And Technology Service (shanghai) Co Ltd
Priority to CN201911172695.0A
Publication of CN110928859A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 — Information retrieval of structured data, e.g. relational data
    • G06F 16/21 — Design, administration or maintenance of databases
    • G06F 16/211 — Schema design and management
    • G06F 16/212 — Schema design and management with details for data modelling support
    • G06F 16/23 — Updating

Abstract

The application relates to a model monitoring method, a model monitoring device, computer equipment and a storage medium. The method comprises the following steps: receiving a model identification and a model monitoring index of a model to be monitored, which are sent by a monitoring terminal, wherein the model monitoring index is an index reflecting the performance of the model to be monitored; judging the model type of the model to be monitored according to the model identification, and acquiring a verification sample set according to the model type, wherein the verification sample set comprises a verification sample and a sample label of the verification sample; inputting the verification sample set into the model to be monitored according to a preset period to obtain a model operation result; comparing the model operation result with the sample label to obtain an index monitoring value corresponding to the model monitoring index; and generating model warning information according to the index monitoring value and the early warning threshold value, and sending the model warning information to the monitoring terminal. The method can be used for monitoring different data models.

Description

Model monitoring method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a model monitoring method and apparatus, a computer device, and a storage medium.
Background
With the progress of big data technology, data models, i.e. models that take input data and output operation results, are increasingly used in business. However, a data model decays: as the input data is updated, the quality of the model's operation results degrades, so the performance of the data model needs to be monitored in time so that the model can be maintained and updated. At present, a method capable of monitoring different data models is lacking.
Disclosure of Invention
In view of the above, it is necessary to provide a model monitoring method, apparatus, computer device and storage medium capable of monitoring different data models.
A method of model monitoring, the method comprising:
receiving a model identification and a model monitoring index of a model to be monitored, which are sent by a monitoring terminal, wherein the model monitoring index is an index reflecting the performance of the model to be monitored;
judging the model type of the model to be monitored according to the model identification, and acquiring a verification sample set according to the model type, wherein the verification sample set comprises a verification sample and a sample label of the verification sample;
inputting the verification sample set into the model to be monitored according to a preset period to obtain a model operation result;
comparing the model operation result with the sample label to obtain an index monitoring value corresponding to the model monitoring index;
and generating model warning information according to the index monitoring value and the early warning threshold value, and sending the model warning information to the monitoring terminal.
In one embodiment, the comparing the model operation result with the sample label to obtain an index monitoring value corresponding to the model monitoring index includes:
comparing the model operation result with the sample label to obtain a confusion matrix of the model to be monitored, wherein the confusion matrix is used for comparing the model operation result with the real information of the sample label;
and generating an index monitoring value corresponding to the model monitoring index according to the confusion matrix.
In one embodiment, the comparing the model operation result with the sample label to obtain an index monitoring value corresponding to the model monitoring index includes:
when the model type is a scoring model, counting model index parameters corresponding to the model monitoring indexes in the model operation result, and counting sample index parameters corresponding to the sample labels and the model monitoring indexes;
and calculating an index monitoring value corresponding to the model monitoring index according to the model index parameter and the sample index parameter.
In one embodiment, the calculating an index monitoring value corresponding to the model monitoring index according to the model index parameter and the sample index parameter further includes:
and calculating an index monitoring value corresponding to the model monitoring index according to the model index parameter and the sample index parameter by adopting normal distribution test or a Gini coefficient.
In one embodiment, the generating model warning information according to the index monitoring value and the warning threshold value includes:
acquiring the accuracy of a model to be monitored and an early warning threshold value of a model monitoring index;
adjusting the early warning threshold value according to the accuracy rate of the model to be monitored;
and comparing the adjusted early warning threshold value with the index monitoring value to generate model warning information.
In one embodiment, the receiving the model identifier and the model monitoring index of the model to be monitored, which are sent by the monitoring terminal, includes:
receiving a user identifier of a monitoring user sent by a monitoring terminal;
generating index selection items from preset model monitoring indexes according to the user identifier, and generating type selection items from preset model identifiers;
sending the index selection item and the type selection item to the monitoring terminal, and receiving the index selection item and the type selection item fed back by the monitoring terminal;
and obtaining the model identification and the model monitoring index of the model to be monitored according to the index option and the type option.
A model monitoring apparatus, the apparatus comprising:
the monitoring request receiving module is used for receiving a model identifier and a model monitoring index of a model to be monitored, which are sent by a monitoring terminal, wherein the model monitoring index is an index reflecting the performance of the model to be monitored;
the sample set obtaining module is used for judging the model type of the model to be monitored according to the model identification and obtaining a verification sample set according to the model type, wherein the verification sample set comprises a verification sample and a sample label of the verification sample;
the model operation module is used for inputting the verification sample set into the model to be monitored according to a preset period to obtain a model operation result;
the monitoring value comparison generation module is used for comparing the model operation result with the sample label to obtain an index monitoring value corresponding to the model monitoring index;
and the warning module is used for generating model warning information according to the index monitoring value and the early warning threshold value and sending the model warning information to the monitoring terminal.
In one embodiment, the monitoring value comparison generation module includes:
the comparison unit is used for comparing the model operation result with the sample label to obtain a confusion matrix of the model to be monitored, and the confusion matrix is used for comparing the model operation result with the real information of the sample label;
and the monitoring value generating unit is used for generating an index monitoring value corresponding to the model monitoring index according to the confusion matrix.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the above method when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method.
According to the model monitoring method, the model monitoring device, the computer equipment and the storage medium, the model operation result is obtained by inputting the verification sample set into the model to be monitored according to the preset period; the model operation result is compared with the sample label to obtain an index monitoring value corresponding to the model monitoring index; and model warning information is generated according to the index monitoring value and the early warning threshold value. This provides real-time alerts on abnormal model conditions, makes it convenient to optimize the model in time, and ensures efficient and stable model operation; business personnel can check the index monitoring values corresponding to the model monitoring indexes to objectively understand how the model is running, which reduces the model-monitoring workload of the modeling personnel.
Drawings
FIG. 1 is a diagram of an application scenario of a model monitoring method in one embodiment;
FIG. 2 is a schematic flow chart diagram of a model monitoring method in one embodiment;
FIG. 3 is a schematic flow chart diagram illustrating the step of model monitoring in one embodiment;
FIG. 4 is a schematic flow chart of the model monitoring step in another embodiment;
FIG. 5 is a block diagram showing the structure of a model monitoring apparatus according to an embodiment;
FIG. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The model monitoring method provided by the application can be applied to the application environment shown in fig. 1. Wherein the monitoring terminal 102 communicates with the server 104 through a network. The server 104 receives a model identifier and a model monitoring index of the model to be monitored, which are sent by the monitoring terminal 102, wherein the model monitoring index is an index reflecting the performance of the model to be monitored; the server 104 judges the model type of the model to be monitored according to the model identification, and acquires a verification sample set according to the model type, wherein the verification sample set comprises a verification sample and a sample label of the verification sample; the server 104 inputs the verification sample set into the model to be monitored according to a preset period to obtain a model operation result; the server 104 compares the model operation result with the sample label to obtain an index monitoring value corresponding to the model monitoring index; the server 104 generates model warning information according to the index monitoring value and the early warning threshold value, and sends the model warning information to the monitoring terminal. The terminal 102 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable smart devices, and the server 104 may be implemented by an independent server or a server cluster formed by a plurality of servers. The model to be monitored may be stored on the server 104, or may be stored on another terminal or another server communicatively connected to the server 104.
In one embodiment, as shown in fig. 2, a model monitoring method is provided, which is described by taking the application of the method to the server in fig. 1 as an example, and includes the following steps:
step 202, receiving a model identifier and a model monitoring index of a model to be monitored, which are sent by a monitoring terminal, wherein the model monitoring index is an index reflecting the performance of the model to be monitored.
The model monitoring index is an index reflecting the performance of the model to be monitored, and can be selected according to the service requirement. The model monitoring index can be an index related to the performance of the model to be monitored, such as accuracy, precision, false discovery rate, false omission rate, negative predictive value, recall rate, false positive rate, miss rate, negative case coverage rate, F1 value, lift, AUC value (Area Under Curve), mean absolute error, mean square error, root mean square error, mean absolute percentage error, coefficient of determination, population stability and the like. The model identification corresponds to the model to be monitored, and can reflect the business item of the model. The server receives the model identification and the model monitoring index of the model to be monitored, which are sent by the monitoring terminal.
And 204, judging the model type of the model to be monitored according to the model identification, and acquiring a verification sample set according to the model type, wherein the verification sample set comprises a verification sample and a sample label of the verification sample.
The validation sample set includes a validation sample and a sample label for the validation sample. Different sets of validation samples correspond to different models to be monitored. The validation samples in different sets of validation samples may overlap or be completely unrelated. The model identification may contain a string representing the model type and a string representing the model item. The verification sample sets of the same model type may be consistent, and the verification sample sets of different model types may be consistent or may not be consistent. For example, when the models are applied to the life insurance assessment field, and the model types are life insurance scoring models, the verification sample sets of different life insurance scoring models may be consistent; when one model type is a life insurance scoring model and the other model type is a life insurance machine learning model, the verification sample sets may be consistent or inconsistent, but the sample labels in the verification sample sets adopted by different models are completely inconsistent. And the server judges the model type of the model to be monitored according to the model identification and acquires a verification sample set according to the model type.
And step 206, inputting the verification sample set into the model to be monitored according to a preset period to obtain a model operation result.
And the server inputs the verification sample set into the model to be monitored according to a preset period to obtain a model operation result. The preset period may be real-time monitoring, day, week, month, or a specific date. The period of real-time monitoring can be in seconds, minutes, hours. When the model type of the model to be monitored is a machine learning model, a voiceprint recognition algorithm model or a natural language processing model, the server can directly input the verification samples in the verification sample set into the model to be monitored to obtain the model operation result. When the model type of the model to be monitored is a scoring model, the server can input training samples and training sample scores in a training set, validation samples and validation sample scores in a validation sample set and the like to obtain a model operation result.
And 208, comparing the model operation result with the sample label to obtain an index monitoring value corresponding to the model monitoring index.
And the server compares the model operation result with the sample label to obtain an index monitoring value corresponding to the model monitoring index. The server can extract index parameters corresponding to the model monitoring indexes from the model operation results and the sample labels, and calculate index monitoring values corresponding to the model monitoring indexes according to the change values of the index parameters; the server can also compare the model operation result with the sample label to obtain a confusion matrix of the model to be monitored, and generate an index monitoring value corresponding to the model monitoring index according to the confusion matrix.
And step 210, generating model warning information according to the index monitoring value and the early warning threshold value, and sending the model warning information to the monitoring terminal.
The early warning threshold is used to characterize the degree of model decay. One or more pre-alarm thresholds may be stored in the database. When a plurality of thresholds exist, the first early warning threshold is used for representing that the model is attenuated and needs to be optimized; the second warning threshold is used to indicate that the model has crashed and needs to be reconstructed. When the index monitoring value is larger than the first early warning threshold value and smaller than the second early warning threshold value, the server can generate model warning information of 'the model is attenuated and needs to be optimized'; when the index monitoring value is larger than the second early warning threshold value, the server can generate model warning information of 'model crashed and needs to be reconstructed'. And the server generates model warning information according to the index monitoring value and the early warning threshold value and sends the model warning information to the monitoring terminal.
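The two-tier threshold logic above can be sketched as follows. This is a minimal illustration, not code from the patent; the function name, argument names, and message strings are assumptions:

```python
def generate_warning(monitor_value, first_threshold, second_threshold):
    """Map an index monitoring value to a model warning message.

    Follows the two-tier scheme described above: exceeding the first
    threshold means the model has decayed; exceeding the second means
    it has crashed. Threshold values and wording are illustrative.
    """
    if monitor_value > second_threshold:
        return "model crashed and needs to be reconstructed"
    if monitor_value > first_threshold:
        return "model is attenuated and needs to be optimized"
    return None  # no warning: performance within tolerance
```

A real deployment would likely load the thresholds from the database mentioned above rather than pass them as arguments.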
In the model monitoring method, the model operation result is obtained by inputting the verification sample set into the model to be monitored according to the preset period; the model operation result is compared with the sample label to obtain an index monitoring value corresponding to the model monitoring index; and model warning information is generated according to the index monitoring value and the early warning threshold value. This provides real-time alerts on abnormal model conditions, makes it convenient to optimize the model in time, and ensures efficient and stable model operation; business personnel can check the index monitoring values corresponding to the model monitoring indexes to objectively understand how the model is running, which reduces the workload of the model monitoring personnel.
In one embodiment, comparing the model operation result with the sample label to obtain an index monitoring value corresponding to the model monitoring index includes the following steps:
and 302, comparing the model operation result with the sample label to obtain a confusion matrix of the model to be monitored, wherein the confusion matrix is used for comparing the model operation result with the real information of the sample label.
The confusion matrix is used for comparing the model operation result with the real information of the sample label and can be represented as a matrix of n rows and n columns. For example, the confusion matrix may be an n × n matrix in which each row represents the prediction result for the verification sample set, each column represents the true information of the verification sample set, and each cell holds the number of test samples of the corresponding prediction type. When n is 2, the confusion matrix divides samples into positive and negative classes, and the cell data are True Positive (TP), False Positive (FP), False Negative (FN) and True Negative (TN). A true positive sample is one whose true class is positive and which the model predicts as positive; a false positive sample is one whose true class is negative but which the model predicts as positive; a false negative sample is one whose true class is positive but which the model predicts as negative; a true negative sample is one whose true class is negative and which the model predicts as negative. The server compares the model operation result with the sample label to obtain the confusion matrix of the model to be monitored. For example, the model type may be a machine learning model, a regression model, a voiceprint recognition algorithm model, or a natural language processing model.
And 304, generating an index monitoring value corresponding to the model monitoring index according to the confusion matrix.
The server may generate an index monitoring value corresponding to the model monitoring index according to the confusion matrix. Accuracy (Accuracy) is the ratio of the number of correct model predictions to the total number of samples, i.e., the Accuracy of the overall judgment of the classification model (including the overall Accuracy of all classes). The calculation formula is as follows:
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
The Precision (also called the positive predictive value) is the proportion of samples that are actually positive among the samples the model predicts as positive. The calculation formula is as follows:
$$\text{Precision} = \frac{TP}{TP + FP}$$
the False discovery rate (False discovery rate) represents the proportion of samples that are truly negative in the samples that the model predicts as positive. The calculation formula is as follows:
$$\text{FDR} = \frac{FP}{TP + FP}$$
the False missing rate (False consistency rate) represents the proportion of the samples that are truly positive in the samples that the model predicts as negative classes, i.e. the proportion of the evaluation model that "misses" out of the positive classes. The calculation formula is as follows:
$$\text{FOR} = \frac{FN}{FN + TN}$$
the Negative predictive value (Negative predictive value) is a ratio of the samples that are actually Negative in the samples predicted to be Negative by the model. The calculation formula is as follows:
$$\text{NPV} = \frac{TN}{TN + FN}$$
the Recall rate, i.e., True positive rate (Recall), is a ratio of the number of samples that the model predicts as positive class to the total number of positive class samples. The calculation formula is as follows:
$$\text{Recall} = \frac{TP}{TP + FN}$$
the False positive rate (Fall-out) is a ratio of the number of samples in the actual negative class to the number of samples in the positive class predicted by the model. The calculation formula is as follows:
$$\text{FPR} = \frac{FP}{FP + TN}$$
the False negative class rate (Miss rate) is the ratio of the number of actual positive classes in the samples predicted as negative classes by the model to the number of actual positive classes in the samples of the true positive classes. The calculation formula is as follows:
$$\text{FNR} = \frac{FN}{TP + FN}$$
the True negative class rate (negative example coverage rate) is a ratio of samples of which the model is predicted to be a negative class to the number of samples of an actual negative class. The calculation formula is as follows:
$$\text{TNR} = \frac{TN}{TN + FP}$$
the F1 value (F1-Score) is a harmonic mean value of the precision rate and the recall rate, and is equivalent to a comprehensive evaluation index of the precision rate and the recall rate. The calculation formula is as follows:
$$F_1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$
Lift measures how much the prediction capability improves after the model is applied, usually compared with the case without a model (i.e. random selection). The denominator is the proportion of positive samples in the total sample count, counted without using the model; the numerator is the proportion of true positives among the samples the model predicts as positive. The formula is as follows:
$$\text{Lift} = \frac{TP / (TP + FP)}{(TP + FN) / (TP + FP + FN + TN)}$$
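The confusion-matrix counts and several of the monitoring indexes above can be computed from a model operation result and the sample labels along these lines. This is a hedged sketch; the patent does not prescribe an implementation, and the function and key names are illustrative:

```python
def binary_confusion_metrics(y_true, y_pred):
    """Compute the 2x2 confusion-matrix counts (TP, FP, FN, TN) and
    a few of the index monitoring values defined above."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": precision,
        "recall": recall,
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
        "f1": (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0),
    }
```

In practice the model operation result would first be thresholded into 0/1 class labels before being compared with the sample labels in this way.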
in another embodiment, the model operation result and the sample label are compared to obtain an index monitoring value corresponding to the model monitoring index, wherein when the model type is a regression algorithm model, the model monitoring index is Mean Absolute Error (MAE), Mean Square Error (MSE), square root error (RMSE), Mean Absolute Percentage Error (MAPE), and Coefficient of Determination (coeffient of Determination).
When the model operation result (predicted value) is $\hat{y} = \{\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_n\}$ and the sample label (true value) is $y = \{y_1, y_2, \ldots, y_n\}$, the mean is:

$$\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i$$
the average absolute error value ranges from 0 to plus infinity and can be presented in fractional form. When the predicted value and the true value are completely matched, the average absolute error (MAE) is equal to 0, namely, a perfect model. The early warning threshold value can be set according to specific requirements.
$$\text{MAE} = \frac{1}{n}\sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|$$
The mean square error ranges from 0 to positive infinity and can occur in the form of a decimal number. When the predicted value and the true value are completely coincided, the Mean Square Error (MSE) is equal to 0, namely a perfect model. The early warning threshold value can be set according to specific requirements.
$$\text{MSE} = \frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^2$$
The root mean square error (RMSE) is the square root of the MSE, which makes its magnitude directly comparable with the target: for example, when RMSE is 10, the regression predictions can be considered to differ from the true values by 10 on average. The RMSE ranges from 0 to positive infinity and can occur in decimal form. The early warning threshold value can be set according to specific requirements.
$$\text{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}$$
Mean Absolute Percent Error (MAPE), a MAPE of 0% indicates perfect model, and a MAPE greater than 100% indicates poor model quality.
$$\text{MAPE} = \frac{100\%}{n}\sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right|$$
The Coefficient of Determination (R²) is the square of the correlation coefficient. The correlation coefficient describes the linear relation between two variables, while the coefficient of determination has a wider range of application: it can also describe nonlinear relations, and relations involving two or more independent variables.
$$R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}$$
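The five regression monitoring indexes above can be computed together from predicted values and sample labels, for example as follows. This is a minimal sketch, not code from the patent; the function name and the returned key names are illustrative:

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute MAE, MSE, RMSE, MAPE (in percent), and R^2 as
    defined by the formulas above."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(mse)
    mape = 100.0 / n * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred))
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot
    return {"mae": mae, "mse": mse, "rmse": rmse, "mape": mape, "r2": r2}
```

Note that MAPE is undefined when a true value is zero, so in practice such samples would need to be excluded or handled separately.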
When the model type is a voiceprint recognition algorithm model, the model monitoring index may also be an error rejection rate, an error acceptance rate, an error threshold, and the like.
The False Rejection Rate (FRR) is the proportion of falsely rejected cases among all same-class (genuine) matching cases: out of a batch of utterances that should all be accepted as correct, the ones that go unrecognized are false rejections. The false rejection rate is the same concept as the false negative rate (miss rate) in the confusion matrix:
$$\text{FRR} = \frac{FN}{TP + FN}$$
the False Acceptance Rate (FAR) is the proportion of False Acceptance cases in all heterogeneous matching cases. Several voices with no recognized errors appear in a batch of the list of all errors, and the voice recognition is error acceptance. The false acceptance rate and the false positive rate in the confusion matrix are the same concept:
$$\text{FAR} = \frac{FP}{FP + TN}$$
binary classification is generally divided into positive and negative classes, and model prediction results are predicted in a probability value form. Therefore, it is usually necessary to set an error threshold, which ranges from 0 to 1, and is generally 0.5 by default, and when the probability value is greater than the threshold, the binary result is predicted to be 1, and when the probability value is less than the threshold, the binary result is predicted to be 0. Adjusting the threshold may balance FAR and FRR according to traffic demand. When a high threshold is set, the score requirement for the system to make an acceptance decision is stricter, FAR is reduced, and FRR is increased; when the low threshold is set, the score requirement for the system to make acceptance decisions is relaxed, FAR is high, and FRR is low. Under different application scenes, different thresholds are adjusted, and balance between safety and convenience can be achieved.
Equal Error Rate (EER): the threshold is adjusted until the False Rejection Rate (FRR) equals the False Acceptance Rate (FAR); the common value of FAR and FRR at that point is the equal error rate. An EER chart is drawn by sweeping an arithmetic sequence of thresholds between 0 and 1 as the decision boundary of the recognition model (the x-axis), plotting the FRR and FAR curves; their intersection is the EER value.
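The threshold sweep described above can be approximated numerically: evaluate FRR and FAR on a grid of thresholds and take the point where the two curves are closest. A minimal sketch, not from the patent; the function name and the grid resolution are assumptions:

```python
def equal_error_rate(scores, labels, steps=1000):
    """Approximate the EER by sweeping thresholds in [0, 1] and
    returning the (EER, threshold) pair where |FRR - FAR| is smallest."""
    best_gap, best_thr, eer = 1.0, 0.5, 0.0
    for i in range(steps + 1):
        thr = i / steps
        tp = sum(1 for s, l in zip(scores, labels) if l == 1 and s >= thr)
        fn = sum(1 for s, l in zip(scores, labels) if l == 1 and s < thr)
        fp = sum(1 for s, l in zip(scores, labels) if l == 0 and s >= thr)
        tn = sum(1 for s, l in zip(scores, labels) if l == 0 and s < thr)
        frr = fn / (fn + tp) if fn + tp else 0.0  # miss rate
        far = fp / (fp + tn) if fp + tn else 0.0  # fall-out
        gap = abs(frr - far)
        if gap < best_gap:
            best_gap, best_thr = gap, thr
            eer = (frr + far) / 2  # average where the curves cross
    return eer, best_thr
```

For a perfectly separable score distribution the EER is 0; in realistic voiceprint systems it is strictly positive and is used to compare models independently of any one operating threshold.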
In one embodiment, as shown in fig. 4, comparing the model operation result with the sample label to obtain an index monitoring value corresponding to the model monitoring index includes the following steps:
step 402, when the model type is a scoring model, counting model index parameters corresponding to model monitoring indexes in the model operation result, and counting sample index parameters corresponding to the sample labels and the model monitoring indexes.
When the model type is a scoring model, the server inputs the samples of the training set and the verification sample set into the model to be monitored, obtaining their scores and the scorecard variable distributions of the two sets. The server counts the model index parameters corresponding to the model monitoring indexes in the model operation result, and counts the sample index parameters corresponding to the sample labels and the model monitoring indexes. From the model index parameters in each model operation result the server generates a model score distribution table, and from the sample index parameters in the sample labels it generates a sample score distribution table. A score distribution table records, for each scoring group, the number of cases and their share of the total, so the case distribution of the training and verification samples across scoring groups can be observed visually and clearly. The training samples are the samples used to train the scoring model.
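A minimal sketch of such a score distribution table, with equal-width bins derived automatically from the observed minimum and maximum score (all names are illustrative, not from the patent):

```python
from collections import Counter

def score_distribution(scores, n_bins=10):
    """Score distribution table: one row (bin_low, bin_high, case count,
    share of total) per equal-width score bin, with bin edges taken from
    the observed minimum and maximum score."""
    lo, hi = min(scores), max(scores)
    width = (hi - lo) / n_bins
    counts = Counter()
    for s in scores:
        counts[min(int((s - lo) / width), n_bins - 1)] += 1  # clamp max score into last bin
    total = len(scores)
    return [(lo + i * width, lo + (i + 1) * width, counts[i], counts[i] / total)
            for i in range(n_bins)]
```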
And 404, calculating an index monitoring value corresponding to the model monitoring index according to the model index parameter and the sample index parameter.
The server calculates the index monitoring value corresponding to the model monitoring index from the model index parameter and the sample index parameter. The server may calculate population stability, variable stability, and the like. Population stability measures the degree to which the scorecard's score proportions differ between the training-set samples and the test-set samples. The PSI formula is as follows:
PSI = Σ (proportion difference of each scoring group × proportion weight) = Σᵢ (Aᵢ − Eᵢ) × ln(Aᵢ / Eᵢ)

where, for scoring group i, the proportion difference Aᵢ − Eᵢ is the case proportion of the test sample minus the case proportion of the training sample, and the proportion weight ln(Aᵢ / Eᵢ) is the natural logarithm of the ratio of the test-sample case proportion to the training-sample case proportion.
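A minimal Python sketch of the PSI computation just defined (the `eps` guard against empty bins is our addition):

```python
import math

def psi(train_props, test_props, eps=1e-6):
    """Population Stability Index over score bins: for each bin,
    (test share - train share) * ln(test share / train share), summed."""
    total = 0.0
    for e, a in zip(train_props, test_props):
        e, a = max(e, eps), max(a, eps)   # avoid log(0) and division by zero
        total += (a - e) * math.log(a / e)
    return total
```

A common industry rule of thumb (not stated in the patent) reads PSI below 0.1 as stable, 0.1–0.25 as worth watching, and above 0.25 as a significant shift.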
Variable stability is used to analyse the drift caused by an individual variable. The distribution difference of each variable is calculated with the same structure as the PSI, summed over the attribute bins of that variable:

VSI = Σᵢ (Aᵢ − Eᵢ) × ln(Aᵢ / Eᵢ)

where Aᵢ and Eᵢ are the case proportions of attribute bin i in the test sample and the training sample respectively.
the AUC (Area Under Curve) is a probability value: when one positive sample and one negative sample are drawn at random, the AUC is the probability that the score computed by the current prediction model ranks the positive sample above the negative one. The score range of each group can be generated automatically by the monitoring system, for example by taking the highest and lowest scores and dividing the range by the number of groups. Within each scoring group, the number and cumulative number of fraud and non-fraud cases are counted. The server calculates the false positive rate of each scoring group from the cumulative fraud count and the total fraud count, and the recall rate of each scoring group from the cumulative non-fraud count and the total non-fraud count. The AUC contribution of each scoring group is then calculated from its false positive rate and recall rate, and the contributions of all scoring groups are summed to give the AUC value of the prediction model.
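The probabilistic definition of AUC given above can also be computed directly as a pairwise comparison, which is equivalent to the group-wise ROC summation the patent describes (a sketch; ties are counted as 0.5 by the usual convention):

```python
def auc(scores, labels):
    """AUC as the probability that a randomly chosen positive sample is
    scored above a randomly chosen negative sample (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```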
In the model monitoring method, the server judges whether the scoring model is stable or not by comparing the verification sample set with the training set of the scoring model.
In another embodiment, calculating an index monitoring value corresponding to the model monitoring index according to the model index parameter and the sample index parameter includes the following steps: and calculating an index monitoring value corresponding to the model monitoring index according to the model index parameter and the sample index parameter by adopting normal distribution test or a Gini coefficient.
The server calculates the index monitoring value corresponding to the model monitoring index from the model index parameter and the sample index parameter using the normal distribution test or the Gini coefficient. The server can count the distribution difference between fraud and non-fraud cases among the verification samples of each scoring group, and generate a score distribution table of fraudulent versus non-fraudulent verification samples. The normal distribution test (the K-S value, i.e. the Kolmogorov-Smirnov value) and the Gini coefficient are discrimination indexes that measure the scorecard model's ability to separate the two conditions. When using the K-S value, after a model is completed the server may divide the samples of the verification sample set equally into M groups, arranged from left to right in descending order of good-sample share, so that the first group has the largest good-sample share and the smallest bad-sample share. Accumulating the good-sample and bad-sample shares group by group gives the cumulative shares for each group. As the cumulative sample share grows, the cumulative good and bad shares diverge; the maximum difference between them is the K-S value.
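The K-S construction above — cumulate good and bad shares in score order and take the largest gap — can be sketched per-sample rather than per-group (a simplification of the M-group version; names are ours):

```python
def ks_statistic(scores, labels):
    """Maximum gap between the cumulative good-sample share and the
    cumulative bad-sample share, taken in descending score order.
    labels: 1 = good sample, 0 = bad sample."""
    n_good = sum(labels)
    n_bad = len(labels) - n_good
    cum_good = cum_bad = 0
    ks = 0.0
    for _, y in sorted(zip(scores, labels), key=lambda p: -p[0]):
        if y == 1:
            cum_good += 1
        else:
            cum_bad += 1
        ks = max(ks, abs(cum_good / n_good - cum_bad / n_bad))
    return ks
```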
When using the Gini coefficient, the server sorts the sample scores of the verification sample set from high to low; the horizontal axis is the cumulative population share and the vertical axis is the cumulative bad-sample share, which grows as the cumulative population share grows. If the score discriminates well, a larger share of bad samples is concentrated in the lower score intervals and the whole curve forms a concave shape. The greater the curvature of this Lorenz curve, the larger the Gini coefficient, and the stronger the model's ability to separate good and bad samples. The Gini coefficient can be calculated from the AUC value: AUC = Gini/2 + 50%, i.e. Gini = 2 × AUC − 1.
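The identity Gini = 2 × AUC − 1 gives a direct way to obtain the Gini coefficient from scores and labels (a self-contained sketch using the pairwise definition of AUC; here label 1 marks the bad case, a convention of ours):

```python
def gini_coefficient(scores, labels):
    """Gini via the identity Gini = 2*AUC - 1 (i.e. AUC = Gini/2 + 0.5),
    with AUC taken as the probability that a bad case (label 1) outscores
    a good case (label 0); ties count 0.5."""
    bad = [s for s, y in zip(scores, labels) if y == 1]
    good = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if b > g else 0.5 if b == g else 0.0
               for b in bad for g in good)
    return 2 * wins / (len(bad) * len(good)) - 1
```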
The server may also use the discrimination power of individual variables to analyse which variables cause the decay of the scoring model, for subsequent adjustment. A variable's discrimination can be measured by its information value, VOI (value of information): the variable's VOI reflects how differently good and bad cases are distributed across its attributes after a performance period, and if a variable's good-case and bad-case proportions differ markedly, the variable has good separating power. The variable VOI is similar in concept to the population stability index (PSI) in the front-end monitoring report, in that both compare whether two groups differ significantly. The difference is that the PSI compares the training and test sets, whereas the variable VOI measures the separating power between the good-case and bad-case groups. The calculation formula of the variable VOI is as follows:
VOI = Σ (proportion difference of each attribute × proportion weight) = Σᵢ (Gᵢ − Bᵢ) × ln(Gᵢ / Bᵢ)

where, for attribute i, the proportion difference Gᵢ − Bᵢ is the case proportion of the good-case group minus the case proportion of the bad-case group, and the proportion weight ln(Gᵢ / Bᵢ) is the natural logarithm of the ratio of the good-case proportion to the bad-case proportion.
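A minimal sketch of the VOI (information value) formula just given, working from per-attribute good/bad case counts (the names and the `eps` guard are ours):

```python
import math

def information_value(good_counts, bad_counts, eps=1e-6):
    """VOI / information value of one variable: for each attribute bin,
    (good share - bad share) * ln(good share / bad share), summed."""
    g_total, b_total = sum(good_counts), sum(bad_counts)
    voi = 0.0
    for g, b in zip(good_counts, bad_counts):
        gp = max(g / g_total, eps)
        bp = max(b / b_total, eps)
        voi += (gp - bp) * math.log(gp / bp)
    return voi
```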
In measuring the ability of each attribute within a variable to distinguish good cases from bad cases, the discrimination is usually judged by the WOE (weight of evidence). The WOE is the relative gap between the good-case and bad-case proportions observed for an attribute, i.e. the odds of a case belonging to that attribute turning out good rather than bad over the performance period. The WOE calculation formula is as follows:

WOE = ln(Gᵢ / Bᵢ) × 100
the attribute WOE thus equals the above-mentioned proportion weight × 100. When the attribute WOE is positive, the attribute's good-case proportion is higher than its bad-case proportion; when the attribute WOE is negative, the attribute's bad-case proportion is higher than its good-case proportion.
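The WOE of a single attribute then reduces to one line (a sketch; the ×100 scaling follows the proportion-weight convention stated above, and the names are ours):

```python
import math

def woe(good_count, bad_count, total_good, total_bad):
    """Weight of evidence of one attribute: ln(good share / bad share) * 100.
    Positive when the attribute is good-case-heavy, negative when bad-heavy."""
    return math.log((good_count / total_good) / (bad_count / total_bad)) * 100
```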
The server can also judge whether the scoring model has decayed from the trend of the good/bad case ratio across scoring groups. A scorecard model with good discriminating power should show more good cases in the high-scoring groups and more bad cases in the low-scoring groups; that is, the good/bad odds increase with the score.

The good/bad odds are calculated by dividing the number of good clients by the number of bad clients in each scoring group, and should rise roughly monotonically as the score increases. The odds of adjacent scoring groups should grow by roughly multiplicative factors, but the degree of change in the relative relationship between scoring groups is sometimes hard to see from the raw odds, so the server can take the natural logarithm, ln(good/bad odds), as an additional observation index.
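The ln(good/bad odds) observation index can be sketched per scoring group as follows (a healthy scorecard should show the values rising from the low-score group to the high-score group; names are ours):

```python
import math

def log_odds_by_group(good_counts, bad_counts):
    """ln(good clients / bad clients) for each scoring group, ordered
    from the lowest-scoring group to the highest-scoring group."""
    return [math.log(g / b) for g, b in zip(good_counts, bad_counts)]
```

Multiplicative growth of the odds becomes roughly equal additive steps in log odds, which makes decay in the relative relationship between adjacent groups easier to spot.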
In the model monitoring method, the server not only judges whether the scoring model is stable or not by comparing the verification sample set with the training set of the scoring model, but also analyzes variables causing the attenuation of the scoring model so as to adjust the variables in the following process.
In one embodiment, generating model alert information based on the indicator monitored value and the early warning threshold comprises the following steps: acquiring the accuracy of a model to be monitored and an early warning threshold value of a model monitoring index; adjusting the early warning threshold value according to the accuracy of the accuracy rate and the accuracy of the early warning threshold value; and comparing the adjusted early warning threshold value with the index monitoring value to generate model warning information.
The server obtains the accuracy of the model to be monitored and the early warning threshold of the model monitoring index. The server then aligns the numeric precision (number of decimal places) of the early warning threshold with that of the accuracy value: for example, when the accuracy is stated as 1.00% (two decimal places) and the threshold as 10.0% (one decimal place), the server restates the threshold as 10.00%. The server compares the adjusted early warning threshold with the index monitoring value and generates model warning information.
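Our reading of this step is that the number of decimal places of the threshold is matched to that of the accuracy figure before the comparison; a hedged sketch (the names, the string-based precision convention, and the breach directions are assumptions, not from the patent):

```python
def align_precision(value, reference):
    """Restate `value` with as many decimal places as `reference`,
    both given as decimal strings, e.g. ('10.0', '1.00') -> '10.00'."""
    decimals = len(reference.split('.')[1]) if '.' in reference else 0
    return f"{float(value):.{decimals}f}"

def model_alert(metric_name, value, threshold, breach="below"):
    """Return alert text when the monitored value breaches the adjusted
    threshold; accuracy-type metrics alarm when below it, drift-type
    metrics such as the PSI when above it."""
    breached = value < threshold if breach == "below" else value > threshold
    if breached:
        return f"ALERT: {metric_name}={value} breached threshold {threshold}"
    return None
```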
In another embodiment, receiving a model identifier and a model monitoring index of a model to be monitored, which are sent by a monitoring terminal, includes: receiving a user identifier of a monitoring user sent by a monitoring terminal; acquiring a preset model monitoring index generation index selection item according to a user identifier, and acquiring a preset model identifier generation type selection item; sending the index selection item and the type selection item to a monitoring terminal, and receiving the index selection item and the type selection item fed back by the monitoring terminal; and obtaining the model identification and the model monitoring index of the model to be monitored according to the index selection item and the type selection item.
And the server receives the user identification of the monitoring user sent by the monitoring terminal. The user identification may comprise a user name and password, etc. The preset model monitoring indexes corresponding to different user identifications may be different. And the server acquires a preset model monitoring index generation index selection item according to the user identification and acquires a preset model identification generation type selection item. The server sends the index selection item and the type selection item to the monitoring terminal, and the monitoring terminal displays the index selection item and the type selection item, so that monitoring personnel can select the evaluation index and the model type of the model to be monitored according to requirements. And the server receives the index options and the type options fed back by the monitoring terminal. And the server acquires the model identification and the model monitoring index of the model to be monitored according to the index selection item and the type selection item.
It should be understood that although the steps in the flow charts of fig. 2-4 are shown sequentially, as indicated by the arrows, they are not necessarily performed in that order; unless explicitly stated otherwise herein, the steps may be performed in other orders. Moreover, at least some of the steps in fig. 2-4 may comprise multiple sub-steps or stages that need not be performed at the same moment or in sequence; they may be performed at different times, in turn, or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, there is provided a model monitoring apparatus including: a monitoring request receiving module 502, a sample set obtaining module 504, a model operating module 506, a monitoring value comparison generating module 508 and an alarm module 510, wherein:
the monitoring request receiving module 502 is configured to receive a model identifier and a model monitoring index of a model to be monitored, where the model monitoring index is an index reflecting performance of the model to be monitored, and the model identifier and the model monitoring index are sent by a monitoring terminal.
And a sample set obtaining module 504, configured to determine a model type of the model to be monitored according to the model identifier, and obtain a verification sample set according to the model type, where the verification sample set includes a verification sample and a sample label of the verification sample.
And the model operation module 506 is configured to input the verification sample set into the model to be monitored according to a preset period, so as to obtain a model operation result.
And a monitoring value comparison generation module 508, configured to compare the model operation result with the sample label to obtain an index monitoring value corresponding to the model monitoring index.
And the warning module 510 is configured to generate model warning information according to the index monitoring value and the early warning threshold, and send the model warning information to the monitoring terminal.
In one embodiment, the monitoring value comparison generation module includes a comparison unit and a monitoring value generation unit, wherein:
and the comparison unit is used for comparing the model operation result with the sample label to obtain a confusion matrix of the model to be monitored, and the confusion matrix is used for comparing the model operation result with the real information of the sample label.
And the monitoring value generating unit is used for generating an index monitoring value corresponding to the model monitoring index according to the confusion matrix.
In some embodiments, the monitoring value comparison generation module includes a statistic unit and a monitoring value calculation unit, wherein:
and the statistical unit is used for counting model index parameters corresponding to the model monitoring indexes in the model operation result and counting sample index parameters corresponding to the sample labels and the model monitoring indexes when the model type is the scoring model.
And the monitoring value calculating unit is used for calculating an index monitoring value corresponding to the model monitoring index according to the model index parameter and the sample index parameter.
In one embodiment, the monitoring value comparison generation module includes a monitoring value calculation unit, wherein:
and the monitoring value calculating unit is used for calculating an index monitoring value corresponding to the model monitoring index according to the model index parameter and the sample index parameter by adopting normal distribution test or a Gini coefficient.
In some embodiments, the alert module includes a threshold obtaining unit, a threshold adjusting unit, and a comparing unit, wherein:
and the threshold value obtaining unit is used for obtaining the accuracy of the model to be monitored and the early warning threshold value of the model monitoring index.
And the threshold adjusting unit is used for adjusting the early warning threshold according to the accuracy of the accuracy rate and the accuracy of the early warning threshold.
And the comparison unit is used for comparing the adjusted early warning threshold value with the index monitoring value to generate model warning information.
In one embodiment, the monitoring request receiving module includes an identifier receiving unit, an option generating unit, a selected item receiving unit, and an identifier index obtaining unit, where:
and the identification receiving unit is used for receiving the user identification of the monitoring user sent by the monitoring terminal.
And the option generating unit is used for acquiring a preset model monitoring index generation index selection item according to the user identifier and acquiring a preset model identifier generation type selection item.
And the selected item receiving unit is used for sending the index selected item and the type selected item to the monitoring terminal and receiving the index selected item and the type selected item fed back by the monitoring terminal.
And the identification index acquisition unit is used for acquiring the model identification and the model monitoring index of the model to be monitored according to the index selection item and the type selection item.
For specific definition of the model monitoring device, reference may be made to the above definition of the model monitoring method, which is not described herein again. The modules in the model monitoring device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing model monitoring data, validation sample sets and the like. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a model monitoring method.
Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, there is provided a computer device comprising a memory storing a computer program and a processor implementing the following steps when the processor executes the computer program: receiving a model identification and a model monitoring index of a model to be monitored, which are sent by a monitoring terminal, wherein the model monitoring index is an index reflecting the performance of the model to be monitored; judging the model type of the model to be monitored according to the model identification, and acquiring a verification sample set according to the model type, wherein the verification sample set comprises a verification sample and a sample label of the verification sample; inputting the verification sample set into a model to be monitored according to a preset period to obtain a model operation result; comparing the model operation result with the sample label to obtain an index monitoring value corresponding to the model monitoring index; and generating model warning information according to the index monitoring value and the early warning threshold value, and sending the model warning information to the monitoring terminal.
In one embodiment, comparing the model operation result and the sample label, which is implemented when the processor executes the computer program, to obtain an index monitoring value corresponding to the model monitoring index includes: comparing the model operation result with the sample label to obtain a confusion matrix of the model to be monitored, wherein the confusion matrix is used for comparing the model operation result with the real information of the sample label; and generating an index monitoring value corresponding to the model monitoring index according to the confusion matrix.
In one embodiment, comparing the model operation result and the sample label, which is implemented when the processor executes the computer program, to obtain an index monitoring value corresponding to the model monitoring index includes: when the model type is a grading model, counting model index parameters corresponding to model monitoring indexes in the model operation result, and counting sample index parameters corresponding to sample labels and the model monitoring indexes; and calculating an index monitoring value corresponding to the model monitoring index according to the model index parameter and the sample index parameter.
In one embodiment, the calculation of the index monitoring value corresponding to the model monitoring index from the model index parameter and the sample index parameter, which is performed when the processor executes the computer program, includes: and calculating an index monitoring value corresponding to the model monitoring index according to the model index parameter and the sample index parameter by adopting normal distribution test or a Gini coefficient.
In one embodiment, the generating of the model alert information based on the indicator monitored value and the early warning threshold, as implemented by the processor executing the computer program, comprises: acquiring the accuracy of a model to be monitored and an early warning threshold value of a model monitoring index; adjusting the early warning threshold value according to the accuracy of the accuracy rate and the accuracy of the early warning threshold value; and comparing the adjusted early warning threshold value with the index monitoring value to generate model warning information.
In one embodiment, the receiving of the model identifier and the model monitoring index of the model to be monitored, which are sent by the monitoring terminal when the processor executes the computer program, includes: receiving a user identifier of a monitoring user sent by a monitoring terminal; acquiring a preset model monitoring index generation index selection item according to a user identifier, and acquiring a preset model identifier generation type selection item; sending the index selection item and the type selection item to a monitoring terminal, and receiving the index selection item and the type selection item fed back by the monitoring terminal; and obtaining the model identification and the model monitoring index of the model to be monitored according to the index selection item and the type selection item.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of: receiving a model identification and a model monitoring index of a model to be monitored, which are sent by a monitoring terminal, wherein the model monitoring index is an index reflecting the performance of the model to be monitored; judging the model type of the model to be monitored according to the model identification, and acquiring a verification sample set according to the model type, wherein the verification sample set comprises a verification sample and a sample label of the verification sample; inputting the verification sample set into a model to be monitored according to a preset period to obtain a model operation result; comparing the model operation result with the sample label to obtain an index monitoring value corresponding to the model monitoring index; and generating model warning information according to the index monitoring value and the early warning threshold value, and sending the model warning information to the monitoring terminal.
In one embodiment, comparing the model operation result and the sample label, which is realized when the computer program is executed by the processor, to obtain an index monitoring value corresponding to the model monitoring index includes: comparing the model operation result with the sample label to obtain a confusion matrix of the model to be monitored, wherein the confusion matrix is used for comparing the model operation result with the real information of the sample label; and generating an index monitoring value corresponding to the model monitoring index according to the confusion matrix.
In one embodiment, comparing the model operation result and the sample label, which is realized when the computer program is executed by the processor, to obtain an index monitoring value corresponding to the model monitoring index includes: when the model type is a grading model, counting model index parameters corresponding to model monitoring indexes in the model operation result, and counting sample index parameters corresponding to sample labels and the model monitoring indexes; and calculating an index monitoring value corresponding to the model monitoring index according to the model index parameter and the sample index parameter.
In one embodiment, a computer program that when executed by a processor implements computing an indicator monitor value corresponding to a model monitor indicator from a model indicator parameter and a sample indicator parameter, comprising: and calculating an index monitoring value corresponding to the model monitoring index according to the model index parameter and the sample index parameter by adopting normal distribution test or a Gini coefficient.
In one embodiment, the computer program when executed by the processor implements generating model alert information based on the indicator monitored value and the early warning threshold, comprising: acquiring the accuracy of a model to be monitored and an early warning threshold value of a model monitoring index; adjusting the early warning threshold value according to the accuracy of the accuracy rate and the accuracy of the early warning threshold value; and comparing the adjusted early warning threshold value with the index monitoring value to generate model warning information.
In one embodiment, the receiving of the model identifier and the model monitoring index of the model to be monitored, which are sent by the monitoring terminal and are implemented when the computer program is executed by the processor, includes: receiving a user identifier of a monitoring user sent by a monitoring terminal; acquiring a preset model monitoring index generation index selection item according to a user identifier, and acquiring a preset model identifier generation type selection item; sending the index selection item and the type selection item to a monitoring terminal, and receiving the index selection item and the type selection item fed back by the monitoring terminal; and obtaining the model identification and the model monitoring index of the model to be monitored according to the index selection item and the type selection item.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware related to instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A method of model monitoring, the method comprising:
receiving a model identification and a model monitoring index of a model to be monitored, which are sent by a monitoring terminal, wherein the model monitoring index is an index reflecting the performance of the model to be monitored;
judging the model type of the model to be monitored according to the model identification, and acquiring a verification sample set according to the model type, wherein the verification sample set comprises a verification sample and a sample label of the verification sample;
inputting the verification sample set into the model to be monitored according to a preset period to obtain a model operation result;
comparing the model operation result with the sample label to obtain an index monitoring value corresponding to the model monitoring index;
and generating model warning information according to the index monitoring value and the early warning threshold value, and sending the model warning information to the monitoring terminal.
2. The method of claim 1, wherein the comparing the model operation result and the sample label to obtain an index monitoring value corresponding to the model monitoring index comprises:
comparing the model operation result with the sample label to obtain a confusion matrix of the model to be monitored, wherein the confusion matrix is used for comparing the model operation result with the real information of the sample label;
and generating an index monitoring value corresponding to the model monitoring index according to the confusion matrix.
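The confusion-matrix step of claim 2 can be sketched as follows; this is an illustrative assumption of a binary (two-class) setting, with hypothetical function names:

```python
def confusion_matrix(predictions, labels):
    """2x2 confusion matrix comparing model output with ground-truth labels."""
    tp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 1)
    tn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 0)
    return tp, fp, fn, tn

def metrics_from_matrix(tp, fp, fn, tn):
    """Index monitoring values commonly derived from the confusion matrix."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

tp, fp, fn, tn = confusion_matrix([1, 1, 0, 0], [1, 0, 1, 0])
precision, recall, acc = metrics_from_matrix(tp, fp, fn, tn)
```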
3. The method of claim 1, wherein the comparing the model operation result and the sample label to obtain an index monitoring value corresponding to the model monitoring index comprises:
when the model type is a scoring model, counting model index parameters corresponding to the model monitoring indexes in the model operation result, and counting sample index parameters corresponding to the sample labels and the model monitoring indexes;
and calculating an index monitoring value corresponding to the model monitoring index according to the model index parameter and the sample index parameter.
4. The method of claim 3, wherein calculating an index monitoring value corresponding to the model monitoring index from the model index parameter and the sample index parameter comprises:
and calculating an index monitoring value corresponding to the model monitoring index from the model index parameter and the sample index parameter by means of a normal distribution test or the Gini coefficient.
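For the scoring-model case of claims 3 and 4, one of the named indices, the Gini coefficient, can be computed from model scores and sample labels. This sketch uses the rank-pair (Mann-Whitney) formulation Gini = 2·AUC − 1; the patent does not specify a formula, so this is an illustrative assumption:

```python
def gini_coefficient(scores, labels):
    """Gini coefficient of a scoring model, computed as 2*AUC - 1, where AUC
    is the fraction of (positive, negative) pairs the model ranks correctly."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count pairs where a positive sample outscores a negative one (ties count half).
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg))
    return 2 * auc - 1
```

A perfectly separating score list yields a Gini of 1.0; random scores yield about 0.0, which is the kind of drift this monitoring index is meant to surface.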
5. The method of claim 1, wherein generating model alert information based on the indicator monitor value and an early warning threshold comprises:
acquiring the accuracy of a model to be monitored and an early warning threshold value of a model monitoring index;
adjusting the early warning threshold value according to the accuracy of the model to be monitored;
and comparing the adjusted early warning threshold value with the index monitoring value to generate model warning information.
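The threshold adjustment of claim 5 can be sketched as below. The linear scaling of the threshold by model accuracy is an illustrative assumption, as the patent does not specify the adjustment formula, and all names are hypothetical:

```python
def generate_alert(metric_value, base_threshold, model_accuracy):
    """Scale the early-warning threshold by the model's accuracy, then
    compare the adjusted threshold with the index monitoring value.
    Returns an alert message, or None when the model is within bounds."""
    adjusted = base_threshold * model_accuracy
    if metric_value < adjusted:
        return (f"ALERT: monitored value {metric_value:.3f} "
                f"below adjusted threshold {adjusted:.3f}")
    return None

msg = generate_alert(0.6, base_threshold=0.9, model_accuracy=0.95)
ok = generate_alert(0.9, base_threshold=0.9, model_accuracy=0.95)
```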
6. The method according to claim 1, wherein the receiving the model identifier and the model monitoring index of the model to be monitored, which are sent by the monitoring terminal, comprises:
receiving a user identifier of a monitoring user sent by a monitoring terminal;
acquiring a preset model monitoring index generation index selection item according to the user identification, and acquiring a preset model identification generation type selection item;
sending the index selection item and the type selection item to the monitoring terminal, and receiving the index selection item and the type selection item fed back by the monitoring terminal;
and obtaining the model identification and the model monitoring index of the model to be monitored according to the fed-back index selection item and type selection item.
7. A model monitoring apparatus, the apparatus comprising:
the monitoring request receiving module is used for receiving a model identifier and a model monitoring index of a model to be monitored, which are sent by a monitoring terminal, wherein the model monitoring index is an index reflecting the performance of the model to be monitored;
the sample set obtaining module is used for judging the model type of the model to be monitored according to the model identification and obtaining a verification sample set according to the model type, wherein the verification sample set comprises a verification sample and a sample label of the verification sample;
the model operation module is used for inputting the verification sample set into the model to be monitored according to a preset period to obtain a model operation result;
the monitoring value comparison generation module is used for comparing the model operation result with the sample label to obtain an index monitoring value corresponding to the model monitoring index;
and the warning module is used for generating model warning information according to the index monitoring value and the early warning threshold value and sending the model warning information to the monitoring terminal.
8. The apparatus of claim 7, wherein the monitor value comparison generation module comprises:
the comparison unit is used for comparing the model operation result with the sample label to obtain a confusion matrix of the model to be monitored, and the confusion matrix is used for comparing the model operation result with the real information of the sample label;
and the monitoring value generating unit is used for generating an index monitoring value corresponding to the model monitoring index according to the confusion matrix.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
CN201911172695.0A 2019-11-26 2019-11-26 Model monitoring method and device, computer equipment and storage medium Pending CN110928859A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911172695.0A CN110928859A (en) 2019-11-26 2019-11-26 Model monitoring method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911172695.0A CN110928859A (en) 2019-11-26 2019-11-26 Model monitoring method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110928859A true CN110928859A (en) 2020-03-27

Family

ID=69851905

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911172695.0A Pending CN110928859A (en) 2019-11-26 2019-11-26 Model monitoring method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110928859A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130024166A1 (en) * 2011-07-19 2013-01-24 Smartsignal Corporation Monitoring System Using Kernel Regression Modeling with Pattern Sequences
CN108109066A (en) * 2017-12-11 2018-06-01 上海前隆信息科技有限公司 A kind of credit scoring model update method and system
CN109801151A (en) * 2019-01-07 2019-05-24 平安科技(深圳)有限公司 Financial fraud risk monitoring and control method, apparatus, computer equipment and storage medium
CN110262939A (en) * 2019-05-14 2019-09-20 苏宁金融服务(上海)有限公司 Algorithm model operation and monitoring method, device, computer equipment and storage medium
CN110489314A (en) * 2019-07-05 2019-11-22 中国平安人寿保险股份有限公司 Model method for detecting abnormality, device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PEIYANG: "Model variable selection methods: IV value and WOE" (模型变量选择方法-IV值WOE), Jianshu, https://www.jianshu.com/p/3a7cb26ca268 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111461581A (en) * 2020-05-17 2020-07-28 商志营 Intelligent early warning management system and implementation method
CN111625437A (en) * 2020-05-27 2020-09-04 北京互金新融科技有限公司 Monitoring method and device of wind control model
CN111625437B (en) * 2020-05-27 2024-01-05 北京互金新融科技有限公司 Monitoring method and device for wind control model
CN114461502A (en) * 2022-02-16 2022-05-10 北京百度网讯科技有限公司 Model monitoring method and device
WO2023155378A1 (en) * 2022-02-16 2023-08-24 北京百度网讯科技有限公司 Method and apparatus for monitoring model
CN114461502B (en) * 2022-02-16 2023-11-14 北京百度网讯科技有限公司 Model monitoring method and device

Similar Documents

Publication Publication Date Title
CN106951984B (en) Dynamic analysis and prediction method and device for system health degree
CN109858737B (en) Grading model adjustment method and device based on model deployment and computer equipment
CN111177714B (en) Abnormal behavior detection method and device, computer equipment and storage medium
CN110489314B (en) Model anomaly detection method and device, computer equipment and storage medium
CN110928859A (en) Model monitoring method and device, computer equipment and storage medium
CN111314173B (en) Monitoring information abnormity positioning method and device, computer equipment and storage medium
EP3475911A1 (en) Life insurance system with fully automated underwriting process for real-time underwriting and risk adjustment, and corresponding method thereof
CN110888625B (en) Method for controlling code quality based on demand change and project risk
CN113627566A (en) Early warning method and device for phishing and computer equipment
CN112435126B (en) Account identification method and device, computer equipment and storage medium
CN110717650A (en) Receipt data processing method and device, computer equipment and storage medium
CN115865649B (en) Intelligent operation and maintenance management control method, system and storage medium
CN111144738A (en) Information processing method, information processing device, computer equipment and storage medium
CN113760670A (en) Cable joint abnormity early warning method and device, electronic equipment and storage medium
CN114997607A (en) Anomaly assessment early warning method and system based on engineering detection data
CN111767192B (en) Business data detection method, device, equipment and medium based on artificial intelligence
CN115409395A (en) Quality acceptance inspection method and system for hydraulic construction engineering
CN116915710A (en) Traffic early warning method, device, equipment and readable storage medium
CN111091276A (en) Enterprise risk scoring method and device, computer equipment and storage medium
CN108764290B (en) Method and device for determining cause of model transaction and electronic equipment
CN110765351A (en) Target user identification method and device, computer equipment and storage medium
CN114202256A (en) Architecture upgrading early warning method and device, intelligent terminal and readable storage medium
CN114139931A (en) Enterprise data evaluation method and device, computer equipment and storage medium
CN110995506B (en) Alarm quantity abnormity positioning method and device, storage medium and computer equipment
CN111724009A (en) Risk assessment method, wind control system and risk assessment equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20200327)