AU2019101158A4 - A method of analyzing customer churn of credit cards by using logistics regression - Google Patents
A method of analyzing customer churn of credit cards by using logistic regression
- Publication number
- AU2019101158A4
- Authority
- AU
- Australia
- Prior art keywords
- data
- model
- logistic regression
- churn
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q40/00—Finance; Insurance; Tax strategies; Processing of corporate or income taxes
- G06Q40/03—Credit; Loans; Processing thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0201—Market modelling; Market analysis; Collecting market data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0201—Market modelling; Market analysis; Collecting market data
- G06Q30/0202—Market predictions or forecasting for commercial activities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
Abstract
In this invention, a shrewd classification algorithm, the logistic regression model, has been utilized to analyze and predict the probability of customer churn, i.e. the loss of retail banks' existing clients. Churn analysis depends heavily on how customer churn is defined. This template collects its data from the database of a major Chinese bank, and four categories of variables, customer information, card information, risk information, and transaction activity, have thus been examined. Although logistic regression is rather sensitive to multicollinearity among the independent variables, it handles this dichotomous problem effectively. In addition to that glittering trait, logistic regression can also readily incorporate new data. [Figure 1: flow chart. Data collection; normalization and labeling preprocessing; logistic regression model with L2 regularization; changing parameters; model testing (confusion matrix, F1 score, accuracy, recall, precision); selecting the best model.] Figure 1
Description
ABSTRACT
In this invention, a shrewd classification algorithm, the logistic regression model, has been utilized to analyze and predict the probability of customer churn, i.e. the loss of retail banks' existing clients. Churn analysis depends heavily on how customer churn is defined. This template collects its data from the database of a major Chinese bank, and four categories of variables, customer information, card information, risk information, and transaction activity, have thus been examined. Although logistic regression is rather sensitive to multicollinearity among the independent variables, it handles this dichotomous problem effectively. In addition to that glittering trait, logistic regression can also readily incorporate new data.
2019101158 30 Sep 2019
Selecting the best model
Figure 1
Title
A method of analyzing customer churn of credit cards by using logistic regression
FIELD OF THE INVENTION
One useful algorithm, the logistic regression model, is deployed to forecast the probability of customer churn.
BACKGROUND OF THE INVENTION
Data mining is designed to ascertain anomalies, patterns, and correlations within large data sets in order to predict outcomes. Applied to banking, it helps banking ventures comprehend their customer base as well as the trillions of transactions at the heart of the financial system. This invention demonstrates the process of building a logistic model, which is an effective approach to predicting customer churn. Accordingly, customer churn predictions provide essential assistance when it comes to targeting more specific groups of customers.
As an immutable fact, the significance of CRM (Customer Relationship Management) needs no further underlining. Given the current status quo, the blossoming of newly established financial services alongside increasingly strict regulation, assessing the future requirements of clients has become an exigent task for bank marketing staff. Bettering the customer retention rate and lowering customer churn are two pivotal responsibilities of an excellent CRM manager; more importantly, they ease competitive pressure and lend a vigorous hand to customer relationship management. Through this experiment, corporations gain access to the satisfaction and desires of their subscribers.
One incentive for retaining clients is that it costs corporations merely one fifth as much to maintain existing customers as to absorb new ones. Furthermore, laying more emphasis on CRM entitles these ventures to build long-term, stable relationships with their customers more easily.
In this digital era, it is common for companies to maintain electronic databases. Analyzing such electronic data can help companies make favorable market decisions. In particular, banks can apply a prediction model based on the public's credit card information to reduce client churn.
In the monumental year of 1979, a Chinese retail banking magnate issued the very first credit card in mainland China, unveiling the first chapter of the credit card business and marking the inception of this financial domain. After decades of development, the credit card has become a major instrument of payment in Chinese society. Currently, there are more than 132 million credit card accounts in China.
Nevertheless, the number of people who use credit cards is far lower, which reveals that one person often owns several cards. These people are empowered to choose between banks to obtain preferential service. As a result, issuing banks suffer a loss of profit, which directly triggers soaring operating costs and a surge in operational difficulty. Under such circumstances, it is vital for bank employees to detect high-stake transactions and take contingency measures against this scenario. Consequently, the losses of banks can be greatly alleviated.
This invention exhibits an intact procedure for analyzing data from a Chinese bank. Instead of introducing a new data mining algorithm, the main objective is to forecast churning customers based on a logistic regression model.
After the introduction, the invention is composed of three segments. The first is a brief summary of the process of predicting the probability of credit card churn. The second is a series of diagrams and charts following the procedure of the data analysis. The last segment comprises the tools exerted in the analysis and the detailed modeling process.
SUMMARY OF THE INVENTION
To identify and predict the probability of credit card churn, and to remedy the shortcomings and deficiencies of existing systems, this invention proposes a model for credit card churn forecasting based on machine learning. The database from this specific bank is an immaculate token: a group of data that includes 4502 individuals and 135 features, which gives full play to the advantages of our model. Besides, all the features are independent and unrelated. By using L2 regularization, this invention improves the training time significantly and overcomes some technical difficulties of the training process, such as overfitting.
This invention utilizes logistic regression to train the model, and employs the Receiver Operating Characteristic curve (ROC) and the Area Under the Curve (AUC), together with recall, precision, and accuracy, to test the model.
Firstly, we import our data into Jupyter. Because the text data involve a lot of punctuation, noise, etc., and cannot be used directly for analysis, we apply data cleaning to eliminate punctuation and gibberish. Therefore, the degree of fitting of our database is improved. Secondly, text representation is employed for labeling, in other words, converting textual information into digital data (replacing "n" and "p" with "-1" and "1"). Moreover, we utilize the MinMaxScaler from the preprocessing module to normalize our data into a range between -1 and 1. After normalizing our data, we divide our database into a training set and a test set in the proportion of 7 to 3. Furthermore, we train our data by importing the program packages from Scikit-learn and employing logistic regression. We originally have one parameter; for the sake of accuracy, we vary this parameter over eight values. Then, we obtain eight models from logistic regression.
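The preprocessing and training steps above can be sketched as follows. This is a minimal sketch, not the patented pipeline: the feature matrix and labels are randomly generated stand-ins for the bank's private data, and the scikit-learn calls (MinMaxScaler, train_test_split, LogisticRegression) are the ones the description names.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(4502, 135))      # stand-in for the 135 features
y = rng.choice([-1, 1], size=4502)    # labels: "n"/"p" mapped to -1/1

# Normalize every feature into [-1, 1], as described above.
X = MinMaxScaler(feature_range=(-1, 1)).fit_transform(X)

# Split the data 7:3 into a training set and a test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Train a logistic regression model on the training set.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(model.score(X_test, y_test))    # accuracy on the held-out 30 percent
```

With 4502 samples, the 30 percent test split contains 1351 samples, matching the support totals reported later in Table 2.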
Furthermore, through the confusion matrix we determine that we have 440 positive samples and 4062 negative samples, which means that our data set is unbalanced. Because of this, we exert the Receiver Operating Characteristic curve (ROC) and the Area Under the Curve (AUC) to test the model. The ROC/AUC curve is a performance measurement for classification problems under various threshold settings: ROC is the probability curve, and AUC represents the degree or measure of separability. By adding more parameters, we acquire more AUC results so that we can evaluate the fitting degree well. In addition, we utilize recall, precision, and accuracy. Recall is the ratio between the number of relevant documents retrieved and the number of all relevant documents in the document library: Recall = TP / (TP + FN). Precision is the ratio between the number of relevant documents retrieved and the total number of documents retrieved: Precision = TP / (TP + FP). Accuracy is the number of correctly classified samples divided by the total number of samples: Accuracy = (TP + TN) / (TP + TN + FN + FP). Then the F1-score is calculated; the F1-score is the harmonic mean of recall and precision: F1 = 2PR / (P + R). After evaluating the models by these standards, the results show that the predictions on the test set fit the real labels well.
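A minimal numeric illustration of the four formulas above, using made-up confusion-matrix counts rather than the bank data:

```python
# Hypothetical confusion-matrix entries (illustrative only, not the bank data).
TP, FP, FN, TN = 80, 20, 40, 860

recall = TP / (TP + FN)                        # 80 / 120 = 0.666...
precision = TP / (TP + FP)                     # 80 / 100 = 0.8
accuracy = (TP + TN) / (TP + TN + FN + FP)     # 940 / 1000 = 0.94
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean
print(round(f1, 4))   # 0.7273
```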
DESCRIPTION OF DRAWING
Figure 1 shows the flow chart
Figure 2 shows the schematic diagram
Figure 3 shows the confusion matrix of the best suitable model
Figure 4 shows the F1-score plot
Figure 5 shows the AUC of the best suitable model
Figure 6 shows the ROC of the best suitable model
DESCRIPTION OF PREFERRED EMBODIMENT
1. Data Acquisition
The data, from an anonymous commercial bank, contain 135 independent variables, including age, gender, occupation, and so on, used to predict credit card loss, the dependent variable. The bank updates the sample data in real time from the information provided when a user registers and from the record generated each time the user uses the credit card, so as to ensure the reliability and timeliness of the sample. To facilitate data analysis, we transformed the data into a matrix with 4503 rows and 136 columns. Each row represents a sample, each column represents an independent variable, and the last column represents the dependent variable, namely whether the credit card is lost or not. We replace yes with 1 and no with 0, in order to unify the magnitude with the independent variables in the next step.
2. Data Preprocessing
Data preprocessing is a data mining technique that consists of transforming raw data into a format that can be processed by a computer. As real data are highly likely to contain a formidable number of errors, this technique can effectively resolve such issues.
(1) Normalization
Because the sizes of the original variables are not uniform, the variables would not be of the same order of magnitude in the calculation process. In order to reduce the computational burden and to facilitate analyzing the influence of every variable upon the results, we normalized them using the Min-Max scaling method in the Scikit-learn package. The following formula shows the principle of this normalization method, where Z represents the normalized result, Xi represents the original variable, and min(Xi) and max(Xi) represent the minimum and maximum values of this variable type respectively.
Z = (Xi - min(Xi)) / (max(Xi) - min(Xi))
After normalization, depending on the size of the original data and its maximum and minimum values, variables of different sizes will be uniformly standardized to a number from 0 to 1. In this way, the computational complexity can be simplified while the accuracy of the data remains essentially unchanged.
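The normalization formula above can be reproduced directly; the age values below are purely illustrative, not a column of the bank data.

```python
import numpy as np

def min_max_scale(x):
    # Z = (Xi - min(Xi)) / (max(Xi) - min(Xi)), mapping the column into [0, 1].
    return (x - x.min()) / (x.max() - x.min())

ages = np.array([23.0, 31.0, 47.0, 65.0])   # a hypothetical feature column
z = min_max_scale(ages)
print(z)   # the smallest value maps to 0.0, the largest to 1.0
```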
(2) Partitioning the data set
We divide the data into two sets, the training set and the test set. The test set contains 30 percent of the data and the training set contains the remaining 70 percent.
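This 70/30 partition corresponds to scikit-learn's train_test_split; the toy arrays below only illustrate the proportion.

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)   # 10 toy samples with 2 features each
y = np.arange(10) % 2
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)
print(len(X_train), len(X_test))   # 7 3
```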
3. Training
Any model which is not pure white-box contains various parameters that can be used to fit the model to the system it is intended to describe. If the modeling is done by an artificial neural network or other machine learning, the optimization of parameters is called training. In more conventional modeling through explicitly given mathematical functions, parameters are often determined by curve fitting.
(1) Regularization
Because we only have 3150 training samples, and to prevent over-fitting of the calculation results, which would keep the model from being applicable to other data, we use L2 regularization to avoid this phenomenon, via the corresponding option in Scikit-learn. L2 regularization reduces the weights of the function by adding a regular term over the parameters. The following formula is the regularization term, where W represents the regularized parameter vector and α represents the regularization coefficient, i.e. the intensity of the regularization. We square and sum the elements of W; the square root of this sum is the L2 norm, and its square, weighted by α, is the regularization term.
α‖W‖₂²
The larger the regularization coefficient, the less easily the model over-fits, and the influence on the final result differs accordingly. In order to find the most suitable model in the following steps, we varied the regularization coefficient over 0.003, 0.03, 0.3, 3, 30, 300, 3000, and 30000. We compared the models produced in these eight cases and evaluated them in the following step to find the best results.
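One way to reproduce this eight-coefficient sweep is sketched below. Note that scikit-learn parameterizes L2 strength as C, the inverse of a coefficient like α above, so the mapping C = 1/α is an assumption about how the values were passed in, and the data are synthetic.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy classification data standing in for the bank sample.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

alphas = [0.003, 0.03, 0.3, 3, 30, 300, 3000, 30000]
models = {a: LogisticRegression(penalty="l2", C=1.0 / a, max_iter=1000).fit(X, y)
          for a in alphas}

# Stronger regularization (larger alpha) shrinks the weights toward zero.
for a in (0.003, 30000):
    print(a, float(abs(models[a].coef_).sum()))
```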
(2) Logistic Regression
Logistic regression, also known as logarithmic probability regression, is a classification algorithm. Through logistic regression, we map a weighted combination of the 135 variables onto a continuous function taking values between 0 and 1, namely the sigmoid function. In this way, the method can easily process data with a large number of variables and present the results intuitively through the confusion matrix and the ROC and AUC images.
By using the Scikit-learn package, we first generated a fitting curve and passed it through the sigmoid function. Since the sigmoid function approaches 0 and 1 on either side of the value 0.5 (reached at x = 0), which serves as the dividing line, our 3150 samples are separated onto the two sides of the function for classification.
As can be seen from Figure 2, we can establish the cost function on this basis. The cost function is also called the objective function. Because the sigmoid output also represents the probability of the result taking the value 1, we can get the probability value of each sample, and the cost function is thereby set up. We can get the best classification model by maximizing the likelihood of the samples. The cost function can be transformed by taking the logarithm to reduce the computational complexity. At the same time, because the resulting negative log-likelihood is convex, we can find the maximum likelihood quickly by the gradient descent method. At this point, we obtain the weights of the variables.
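The derivation above can be sketched as plain gradient descent on the negative log-likelihood. The toy data, true weights, learning rate, and iteration count below are all assumptions for illustration; the bank's features are not used.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Generate toy data from known weights, then recover them.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (sigmoid(X @ true_w) > rng.random(200)).astype(float)

# Gradient of the negative log-likelihood for logistic regression is
# X^T (sigmoid(Xw) - y) / n; descend it with a fixed learning rate.
w = np.zeros(3)
for _ in range(2000):
    w -= 0.1 * X.T @ (sigmoid(X @ w) - y) / len(y)

print(w)   # estimated weights, roughly aligned with true_w
```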
After the weights of each variable are obtained, the model is established. By using the test set to evaluate the model, we can compute the confusion matrix shown in Figure 3, together with the AUC and ROC, to determine whether the model is good or bad. Because we used eight values for the regularization parameter, we can compare them in the following steps to get the most appropriate model.
4. Optimizing
A crucial part of the modeling process is the evaluation of whether or not a given mathematical model describes a system accurately. This question can be difficult to answer as it involves several different types of evaluation.
(1) Confusion matrix
After we have the two sets of data, we begin to determine favorable parameters for the logistic model. In order to obtain a relatively precise factor, we change the coefficient of the regularization, as mentioned in previous steps. Then we apply the confusion matrix to check the accuracy of each parameter, using the packages of Scikit-learn. This type of matrix has four cells. From left to right and from the first to the second line, they respectively mean True Positive, False Positive, False Negative, and True Negative. Specifically, True Positive means the model's prediction and the actual answer are both positive, and True Negative means the model's prediction and the actual answer are both negative. These two factors represent the correct predictions of the model, so we would like to make them as large as possible. To this end, we compute a confusion matrix for each logistic model with its parameter. Then we find the matrix with the biggest True Positive and True Negative counts, so that we can use the parameter of this model for future prediction; as shown in Table 1, this is the model with coefficient 30.
Table 1 The Evaluation Index
coefficient | precision | recall | F1 score | AUC |
---|---|---|---|---|
0.003 | 0.00 | 0.00 | 0.00 | 0.862564134651483 |
0.03 | 0.84 | 0.20 | 0.32 | 0.9205981729445626 |
0.3 | 0.76 | 0.47 | 0.58 | 0.945682642973345 |
3 | 0.78 | 0.58 | 0.67 | 0.9555437367037918 |
30 | 0.75 | 0.60 | 0.66 | 0.9574083343761732 |
300 | 0.72 | 0.58 | 0.64 | 0.9559379301714428 |
3000 | 0.72 | 0.58 | 0.64 | 0.9527218120385433 |
30000 | 0.73 | 0.59 | 0.65 | 0.9502002252534101 |
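The model-selection step above can be sketched with scikit-learn's confusion_matrix. One caveat: scikit-learn orders the matrix with the negative class first (TN in the top-left), the reverse of the TP-first layout described in the text. The labels and the two candidate predictions below are toy values, not the bank data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
preds = {                       # hypothetical predictions from two models,
    0.3: np.array([1, 0, 0, 1, 0, 1, 1, 0]),   # keyed by coefficient
    30:  np.array([1, 0, 1, 1, 0, 0, 0, 0]),
}

def correct(y, p):
    cm = confusion_matrix(y, p)   # [[TN, FP], [FN, TP]] in scikit-learn
    return cm[0, 0] + cm[1, 1]    # TN + TP, the correct predictions

# Keep the model whose matrix has the most correct predictions.
best = max(preds, key=lambda k: correct(y_true, preds[k]))
print(best)   # 30
```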
(2) AUC and ROC
The AUC is the area enclosed under the ROC curve and the coordinate axis, and is a measure of how good a model is. The ROC curve is obtained by dividing the true positives by the sum of the true positives and the false negatives to get the true positive rate as the y-coordinate, and dividing the false positives by the sum of the false positives and the true negatives to get the false positive rate as the x-coordinate.
TPR = TP / (TP + FN), FPR = FP / (FP + TN)
After establishing the coordinate system and calculating the data for the different parameters (the 8 values 0.003, 0.03, 0.3, 3, 30, 300, 3000, and 30000), we find that the best parameter is 30, which leads to the largest AUC value, 0.9574083343761732 (the remaining AUC values are listed in Table 1). Furthermore, we drew a line graph of AUC against the parameter to find the largest value, and finally used this parameter to draw the ROC curve.
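The ROC/AUC computation can be sketched as follows; the scores are made-up predicted probabilities, not the model's outputs on the bank data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.3, 0.7])

# roc_curve sweeps the decision threshold and returns the FPR (x-axis)
# and TPR (y-axis) at each threshold; roc_auc_score integrates the curve.
fpr, tpr, thresholds = roc_curve(y_true, scores)
auc = roc_auc_score(y_true, scores)
print(auc)   # 0.9375
```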
Table 2 shows the report of the best model after training.
Table 2 the report of the best model after training
precision | recall | f1-score | support |
0 | 0.96 | 0.98 | 0.97 | 1220 |
1 | 0.75 | 0.60 | 0.66 | 131 |
micro avg | 0.94 | 0.94 | 0.94 | 1351 |
macro avg | 0.85 | 0.79 | 0.82 | 1351 |
weighted avg | 0.94 | 0.94 | 0.94 | 1351 |
(3) F1-score
We make use of the F1-score to evaluate the models at the same time as using the AUC. The F1-score is the harmonic mean of precision and recall, F1 = 2PR / (P + R). It gives a more direct view of whether the model is fit enough or not. In order to seek out the best model, we employ eight parameters: 0.003, 0.03, 0.3, 3, 30, 300, 3000, 30000, with tenfold differences between successive values. According to these parameters, we obtain the eight F1-scores shown in Table 1: 0.00, 0.32, 0.58, 0.67, 0.66, 0.64, 0.64, 0.65.
Then, we graph a line chart so that the trend of the F1-score can be seen clearly in Figure 4.
The first four F1-scores form an upward curve, and as our parameters increase further we see a steady trend of high F1-scores. The graph shows that the larger parameters, 3, 30, 300, 3000, and 30000, give the best F1-scores. The higher the F1-score, the higher the degree of fit. In this sense, we obtain a good fit of our model with the last five parameters.
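The F1 computation can be cross-checked against scikit-learn's f1_score; the toy predictions below are illustrative only.

```python
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])

p = 3 / 4   # precision: 3 of the 4 predicted positives are correct
r = 3 / 4   # recall: 3 of the 4 actual positives are found
f1_manual = 2 * p * r / (p + r)   # harmonic mean of precision and recall
print(f1_manual, f1_score(y_true, y_pred))   # both 0.75
```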
Claims (2)
1. A method of analyzing customer churn of credit cards by using logistic regression, wherein by using L2 regularization, this invention improves the training time significantly and avoids some technical difficulties such as overfitting; L2 regularization reduces the weights of the function by adding a regular term over each variable.
2. The method of analyzing customer churn of credit cards by using logistic regression according to claim 1, wherein, during the training process, the optimal parameters are searched by going through eight representative sets of parameters, obtaining eight distinctive models; each model is estimated by various evaluation indicators, guaranteeing that the selected model is well fitted; hence, the result is nearly optimal.
Figure 1
Schematic Diagram
Figure 2
Confusion Matrix
Predicted label
Figure 3
Figure 4
AUC plot
Figure 5
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2019101158A AU2019101158A4 (en) | 2019-09-30 | 2019-09-30 | A method of analyzing customer churn of credit cards by using logistics regression |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2019101158A AU2019101158A4 (en) | 2019-09-30 | 2019-09-30 | A method of analyzing customer churn of credit cards by using logistics regression |
Publications (1)
Publication Number | Publication Date |
---|---|
AU2019101158A4 true AU2019101158A4 (en) | 2019-10-31 |
Family
ID=68342026
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU2019101158A Ceased AU2019101158A4 (en) | 2019-09-30 | 2019-09-30 | A method of analyzing customer churn of credit cards by using logistics regression |
Country Status (1)
Country | Link |
---|---|
AU (1) | AU2019101158A4 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111311125A (en) * | 2020-03-25 | 2020-06-19 | 中国建设银行股份有限公司 | Method and device for selecting resource linkage scheme among bank outlets based on genetic algorithm |
CN111754337A (en) * | 2020-06-30 | 2020-10-09 | 上海观安信息技术股份有限公司 | Method and system for identifying credit card maintenance contract group |
CN113282886A (en) * | 2021-05-26 | 2021-08-20 | 北京大唐神州科技有限公司 | Bank loan default judgment method based on logistic regression |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11599953B1 (en) | Insurance risk scoring based on credit utilization ratio | |
US11631032B2 (en) | Failure feedback system for enhancing machine learning accuracy by synthetic data generation | |
AU2019101158A4 (en) | A method of analyzing customer churn of credit cards by using logistics regression | |
EP3220331A1 (en) | Behavioral misalignment detection within entity hard segmentation utilizing archetype-clustering | |
US20170018030A1 (en) | System and Method for Determining Credit Worthiness of a User | |
CN110246031A (en) | Appraisal procedure, system, equipment and the storage medium of business standing | |
CN106611375A (en) | Text analysis-based credit risk assessment method and apparatus | |
US20100161526A1 (en) | Ranking With Learned Rules | |
KR20200075120A (en) | Business default prediction system and operation method thereof | |
CN107633455A (en) | Credit estimation method and device based on data model | |
Tiwari | Supervised learning: From theory to applications | |
CN110223182A (en) | A kind of Claims Resolution air control method, apparatus and computer readable storage medium | |
US9037607B2 (en) | Unsupervised analytical review | |
CN109685321A (en) | Event risk method for early warning, electronic equipment and medium based on data mining | |
KR20200123726A (en) | Planning system and method of financial contents | |
US20210216845A1 (en) | Synthetic clickstream testing using a neural network | |
CN109102396A (en) | A kind of user credit ranking method, computer equipment and readable medium | |
CN115205011B (en) | Bank user portrait model generation method based on LSF-FC algorithm | |
CN114170000A (en) | Credit card user risk category identification method, device, computer equipment and medium | |
Özdemir et al. | Website performances of commercial banks in Turkey | |
CN113488127B (en) | Sensitivity processing method and system for population health data set | |
Gómez-Restrepo et al. | Detection of Fraudulent Transactions Through a Generalized Mixed Linear Models | |
CN117670350A (en) | Transaction anti-fraud early warning method and device based on multi-model integration | |
CN116720118A (en) | Label quality intelligent analysis method and device, electronic equipment and storage medium | |
CN113283979A (en) | Loan credit evaluation method and device for loan applicant and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FGI | Letters patent sealed or granted (innovation patent) | ||
MK22 | Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry |