CN111783873A - Incremental naive Bayes model-based user portrait method and device - Google Patents


Publication number
CN111783873A
Authority
CN
China
Prior art keywords: data, naive Bayes, incremental, training, user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010608841.6A
Other languages: Chinese (zh)
Other versions: CN111783873B (en)
Inventor
朱芳鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Application filed by Industrial and Commercial Bank of China Ltd (ICBC)
Priority to CN202010608841.6A
Publication of CN111783873A
Application granted; publication of CN111783873B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155 Bayesian classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201 Market modelling; Market analysis; Collecting market data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/06 Asset management; Financial planning or analysis
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a user portrait method and device based on an incremental naive Bayes model. The method comprises: acquiring user data and preprocessing it to generate attribute information of the user data; inputting the attribute information into a trained incremental naive Bayes model to generate category information of the user, where the incremental naive Bayes model is obtained by incremental learning on incremental data starting from a trained weighted naive Bayes model; and constructing the user portrait of the user according to the category information. The method and device save training resources, improve training efficiency, and can further improve the accuracy of user portraits.

Description

Incremental naive Bayes model-based user portrait method and device
Technical Field
The invention relates to the field of data processing, in particular to a user portrait method and device based on an incremental naive Bayesian model.
Background
Centred on the customer, built on a data lake, and using big data and artificial-intelligence technology, an enterprise-level intelligent panoramic customer portrait breaks down the barriers in the association relationships between individuals and legal persons and provides 360-degree, all-round customer information. It connects to the business systems of head and branch offices through online services, page services, API (application program interface) services, flexible queries and other modes, provides a shared customer-portrait data service, and builds a three-dimensional, multi-dimensional enterprise-level intelligent panoramic customer portrait featuring full information integration, full customer coverage, full disclosure of associations, full process support, and unification across the whole bank.
At present, many classification methods are available for modelling users, such as the Support Vector Machine (SVM), Neural Network (NN), Logistic Regression (LR) and Naive Bayes (NB). The naive Bayes algorithm, rooted in classical mathematical theory, offers stable classification efficiency, is insensitive to missing data, and is well suited to incremental training, so it is widely applied in the field of user portraits.
Facing ever-growing data and changing features, a model must be updated continuously to adapt to new changes. However, during model training, incremental data conventionally has to be retrained together with the original data, which increases resource consumption and the training burden.
Disclosure of Invention
In view of the above, the present invention provides a method and an apparatus for user portrayal based on an incremental naive bayes model, so as to solve at least one of the above-mentioned problems.
According to a first aspect of the invention, there is provided a user portrayal method based on an incremental naive bayes model, the method comprising: acquiring user data, and preprocessing the user data to generate attribute information of the user data; inputting the attribute information of the user data into a trained incremental naive Bayes model to generate category information of the user, wherein the incremental naive Bayes model is obtained by incremental learning through incremental data based on the trained weighted naive Bayes model; and constructing the user portrait of the user according to the category information of the user.
According to a second aspect of the invention, there is provided a user-portrayal device based on an incremental naive bayes model, the device comprising: the data preprocessing unit is used for acquiring user data and preprocessing the user data to generate attribute information of the user data; the category information generation unit is used for inputting the attribute information of the user data into a trained incremental naive Bayesian model to generate the category information of the user, wherein the incremental naive Bayesian model is obtained by incremental learning through incremental data based on the trained weighted naive Bayesian model; and the user portrait constructing unit is used for constructing the user portrait of the user according to the category information of the user.
According to a third aspect of the present invention, there is provided an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method when executing the program.
According to a fourth aspect of the invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the above-mentioned method.
According to the technical scheme, the attribute information of the user data is obtained by preprocessing the user data, and then the attribute information is input into the trained incremental naive Bayes model to obtain the corresponding category information, so that the user portrait can be constructed according to the category information.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow diagram of a user portrayal method based on an incremental naive Bayes model in accordance with an embodiment of the present invention;
FIG. 2 is a schematic diagram of a principal component analysis extraction flow according to an embodiment of the invention;
FIG. 3 is a schematic diagram of user attribute and category structure processing according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the construction of a weighted naive Bayes model in accordance with an embodiment of the present invention;
FIG. 5 is a schematic diagram of the construction of an incremental naive Bayes model in accordance with an embodiment of the invention;
FIG. 6 is a block diagram of a user-portrayal device based on an incremental naive Bayes model, in accordance with an embodiment of the present invention;
FIG. 7 is a block diagram illustrating a detailed configuration of a user-portrayal device based on an incremental naive Bayes model in accordance with an embodiment of the present invention;
FIG. 8 is an overall architecture block diagram of an incremental naive Bayes model in accordance with an embodiment of the invention;
FIG. 9 is a block diagram of a data pre-processing module according to an embodiment of the invention;
FIG. 10 is a schematic block diagram of the system configuration of an electronic device 600 according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the existing model-training process, incremental data must be retrained together with the original data, which increases resource consumption and the training burden. In view of this, the embodiment of the invention provides an incremental naive Bayes algorithm suited to the user-portrait field; through incremental iterative training, it saves training resources and improves training efficiency. Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flow chart of a user portrayal method based on an incremental naive bayes model according to an embodiment of the invention, as shown in fig. 1, the method comprising:
step 101, obtaining user data, and preprocessing the user data to generate attribute information of the user data.
Specifically, the preprocessing here includes performing data cleaning, data integration and data reduction operations on the user data.
And 102, inputting the attribute information of the user data into a trained incremental naive Bayes model to generate class information of the user, wherein the incremental naive Bayes model is obtained by incremental learning through incremental data based on the trained weighted naive Bayes model.
And 103, constructing the user portrait of the user according to the category information of the user.
The user data are preprocessed to obtain the attribute information of the user data, and then the attribute information is input into the trained incremental naive Bayes model to obtain the corresponding category information, so that the user portrait can be constructed according to the category information.
In step 101, the preprocessing operation specifically includes: (1) cleaning the data, which improves data quality and specifically covers data parsing and format conversion, duplicate data removal, and the like; (2) integrating the data, mainly performing integrated statistical analysis of the user's specific behaviour data and consolidating the data into fact tags, i.e. attributes; (3) reducing the data, mainly lowering its dimensionality and volume so as to reduce the data scale. These three operations are described in detail below.
(1) Cleaning operation for data
Cleaning the data improves its quality. Specifically, the cleaning operation mainly comprises four steps: (1) data auditing; (2) defining a data-cleaning workflow; (3) executing the data-cleaning workflow; (4) subsequent processing and control.
In a specific implementation, data cleaning mainly includes: (1) data parsing and conversion; (2) enforcing integrity constraints; (3) duplicate data removal, and the like.
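The cleaning steps above (format conversion, an integrity constraint, duplicate removal) can be sketched with pandas. The column names, the sample records and the 0-to-120 age constraint are illustrative assumptions, not part of the patent:

```python
# Hypothetical sketch of the cleaning step: format conversion,
# an integrity constraint, and duplicate removal.
import pandas as pd

def clean_user_data(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # Format conversion: normalise the date column to datetime;
    # unparseable values become NaT instead of raising.
    df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
    # Integrity constraint: keep only plausible ages.
    df = df[df["age"].between(0, 120)]
    # Duplicate removal: keep the first record per user.
    df = df.drop_duplicates(subset="user_id", keep="first")
    return df

raw = pd.DataFrame({
    "user_id": [1, 1, 2, 3],
    "age": [25, 25, 180, 34],
    "signup_date": ["2020-01-05", "2020-01-05", "2020-02-01", "bad"],
})
cleaned = clean_user_data(raw)  # drops the duplicate and the age-180 row
```

The order of the three operations follows the audit-then-execute workflow described above; in practice each rule would come from the data-auditing step.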
(2) Performing integrated operations on data
Data from different sources, such as users' original information data, asset detail data, behaviour detail data and consumption detail data, are integrated to resolve semantic differences, structural differences, inter-field associations, redundant duplication and similar problems among the data sources, providing a unified view for the user. Through integrated statistical analysis of the user's asset, behaviour and consumption detail data, the original data are consolidated into fact tags, i.e. attributes; for example, by analysing the user's behaviour detail data, information such as the user's monthly activity count and active periods is counted.
(3) Performing reduction operations on data
Reducing the data lowers its dimensionality and volume, thereby reducing the data scale. The reduction operation mainly comprises two methods: dimensionality reduction and numerosity reduction. The embodiment of the invention mainly adopts dimensionality reduction: independent variables are reconstructed through Principal Component Analysis (PCA) to reduce their number and mutual correlation, and redundant attributes are deleted through Feature Subset Selection (FSS) to reduce dimensionality and data volume. For example, by abstracting upwards from the user's original information such as age, gender and education level, general basic attributes of the user are found and, combined with information such as geographic location, abstracted further into demographic attributes; the upper-layer labels are all label sets abstracted through PCA, and lower-layer redundant attributes are deleted through feature-subset selection. Reference may be made to the PCA extraction flow shown in fig. 2.
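The PCA-based dimensionality reduction can be illustrated as follows. The synthetic feature matrix stands in for user attributes such as age and activity counts, and the assumption that five observed attributes derive from two latent factors is made only so the example has redundancy for PCA to remove:

```python
# Sketch of dimensionality reduction with PCA: five correlated
# attributes are compressed to two principal components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 2))            # two hidden factors
# Five observed attributes built from the two factors plus tiny noise,
# so almost all variance lives in a 2-dimensional subspace.
X = latent @ rng.normal(size=(2, 5)) + 0.01 * rng.normal(size=(100, 5))

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)
explained = pca.explained_variance_ratio_.sum()  # close to 1.0 here
```

In the patent's setting the retained components would then feed the label-abstraction step shown in fig. 2.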
After data preprocessing, attributes can be extracted from the original data; see the user attribute and category structure processing diagram of fig. 3. As shown in fig. 3, user attributes such as user value, activity count, financing type, purchase amount, behaviour count and active time are classified and analysed by the incremental naive Bayes model, and users can be classified by user category, financing preference, risk prediction, churn prediction and the like. For example, user categories may be divided into post-90s high-value users, post-80s medium-value users, etc.; financing preference may be divided into content-with-modest-wealth, balanced, conservative and similar types; risk-churn prediction distinguishes high-risk customers who churn easily from low-risk customers unlikely to churn, and so on. The method can thus label customers, making it convenient to construct user portraits, on the basis of which targeted customer maintenance, churn prevention and the like can be carried out.
For a better understanding of the embodiments of the present invention, the training process of the weighted naive bayes model and the incremental naive bayes model is described in detail below.
(1) Weighted naive Bayes model training process
In practice, the weighted naive bayes model can be trained by: acquiring preprocessed historical user data, wherein the historical user data comprises: attribute information and category information; dividing the historical user data into training set data, test set data and candidate increment set data according to a preset rule, wherein the candidate increment set data comprises: a plurality of incremental data; and training the weighted naive Bayes model according to the training set data and the test set data.
Specifically, training the weighted naive bayes model based on the training set data and the test set data comprises: and constructing an initial weighted naive Bayes model according to the attribute information and the category information in the training set data, wherein the attribute information comprises: a plurality of attributes; and then updating the initial weighted naive Bayes model according to the test set data to obtain a trained weighted naive Bayes model.
When an initial weighted naive Bayes model is constructed, the weight of each attribute in each data attribute information in the training set data can be determined based on information gain; and then constructing an initial weighted naive Bayes model according to the attribute information, the category information and the weight of each attribute in the training set data.
One embodiment of a weighted naive bayes model training process is given below.
Before constructing the weighted naive Bayes model (or classifier), the data set is divided, after data cleaning, integration and reduction, into a training set D, a candidate increment set D1 and a test set D2, in proportions of 50%, 40% and 10% respectively. The attribute set of the training set D is U = {A1, A2, ..., An, C}, where A1, A2, ..., An are the attribute variables of the sample data and C is the class variable taking m values C1, C2, ..., Cm. Each sample X in the data set D may be denoted X = {x1, x2, ..., xn, Cj}, where x1, x2, ..., xn are the values of A1, A2, ..., An respectively and Cj is the value of the class to which the sample belongs.
Given a sample X of unknown class, the naive Bayes model predicts the class of X as the one with the maximum posterior probability P(Cj|X), computed by the following formula (1):

P(Cj|X) = P(X|Cj)·P(Cj) / P(X)   (1)

Since the denominator P(X) is a constant, the naive Bayes classification model obtained from the Bayes formula and the naive Bayes conditional-independence assumption is:

P(Cj|X) ∝ P(Cj) · ∏k=1..n P(xk|Cj)   (2)
because the importance of each condition attribute to the classification result is different, the assumption of the conditional independence of naive Bayes needs to be corrected and expanded into weighted naive Bayes in practical application. The embodiment of the invention utilizes information gain to weight the classification attributes. Information Gain (IG) is an important concept in Information theory, and in IG, the importance of attributes can be measured by the size of Information amount brought to a classification system by the attributes, and the larger the value is, the larger the Information amount contained in the Information Gain is. The information gain brought by the attribute a to the category C is expressed by the following formula (3):
Gain(A)=H(C)-H(C|A) (3)
where

H(C) = −Σj=1..m P(Cj) log2 P(Cj)   (4)

H(C|A) = −Σi=1..n P(xi) Σj=1..m P(Cj|xi) log2 P(Cj|xi)   (5)

In formula (4), H(C) is the entropy of class C, m is the number of values of class C, and P(Cj) is the probability that a sample belongs to class Cj. In formula (5), H(C|A) is the conditional entropy once attribute A is determined, n is the number of values of attribute A, and P(Cj|xi) is the probability of class Cj given that attribute A takes the value xi.
By calculating the information gain Gain(Ak) of each attribute, the weight of each feature attribute can be obtained as shown in the following formula (6):

Wk = Gain(Ak) / Σi=1..n Gain(Ai)   (6)
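The information-gain weighting of formulas (3) to (6) can be sketched as follows, assuming the weights are the gains normalised to sum to 1 (the toy data set, in which attribute 0 fully determines the class and attribute 1 is noise, is an illustrative assumption):

```python
# Sketch of information-gain attribute weighting: Gain(A) = H(C) - H(C|A),
# then normalise the gains into weights.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def conditional_entropy(feature, labels):
    # H(C|A) = sum over values v of P(A=v) * H(C | A=v)
    total = len(feature)
    h = 0.0
    for v in np.unique(feature):
        mask = feature == v
        h += mask.sum() / total * entropy(labels[mask])
    return h

def info_gain_weights(X, y):
    gains = np.array([entropy(y) - conditional_entropy(X[:, k], y)
                      for k in range(X.shape[1])])
    return gains / gains.sum()

# Toy data: attribute 0 determines the class, attribute 1 is pure noise.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 1, 1])
w = info_gain_weights(X, y)  # attribute 0 gets all the weight
```

As expected, the informative attribute receives weight 1 and the noise attribute weight 0.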
Then the weighted naive Bayes classification model is:

P(Cj|X) ∝ P(Cj) · ∏k=1..n P(xk|Cj)^Wk   (7)
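A minimal classifier in the spirit of formula (7) can be sketched as follows: each class is scored by the prior times the weighted product of conditional probabilities (log-domain for stability). The Laplace smoothing and the toy data are assumptions added so the sketch runs on small counts; they are not specified by the patent:

```python
# Sketch of a weighted naive Bayes classifier: score(C) =
# log P(C) + sum_k W_k * log P(x_k | C), per formula (7).
import numpy as np

class WeightedNB:
    def fit(self, X, y, weights):
        self.w = weights
        self.classes = np.unique(y)
        self.n_values = [int(X[:, k].max()) + 1 for k in range(X.shape[1])]
        self.prior, self.cond = {}, {}
        for c in self.classes:
            Xc = X[y == c]
            self.prior[c] = len(Xc) / len(X)
            # cond[c][k][v] = Laplace-smoothed P(x_k = v | C = c)
            self.cond[c] = [
                (np.bincount(Xc[:, k], minlength=self.n_values[k]) + 1)
                / (len(Xc) + self.n_values[k])
                for k in range(X.shape[1])
            ]
        return self

    def predict_one(self, x):
        def score(c):
            s = np.log(self.prior[c])
            for k, v in enumerate(x):
                s += self.w[k] * np.log(self.cond[c][k][v])
            return s
        return max(self.classes, key=score)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 1, 1])
# Attribute 0 carries all the weight, attribute 1 none.
model = WeightedNB().fit(X, y, weights=np.array([1.0, 0.0]))
pred = model.predict_one(np.array([0, 1]))  # attribute 0 = 0 -> class 0
```

With weight 0, the noisy second attribute cannot influence the decision, which is exactly the effect the weighting is meant to achieve.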
as can be seen from the above description, the construction process of the classifier is shown in fig. 4, and specifically includes the following steps:
step 401: after data preprocessing, a training set D, a candidate increment set D1 and a test set D2 of the classifier are obtained, and the proportion of the training set D, the candidate increment set D1 and the test set D2 is respectively 50%, 40% and 10%.
Step 402: for the training set D, calculate the class prior probability P(Cj) and the class conditional probability P(xk|Cj). P(Cj) can be estimated by

P(Cj) = |Cj| / N

where |Cj| is the number of samples of class Cj and N is the total number of samples. P(xk|Cj), the probability of the feature item (attribute) xk occurring in class Cj, is estimated by

P(xk|Cj) = |xjk| / |Cj|

where |xjk| denotes the number of occurrences of the kth feature item in the jth category.
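The count-based estimates of step 402 can be illustrated with a tiny worked example; the attribute values and class labels ("high"/"low", "vip"/"normal") are illustrative assumptions:

```python
# Sketch of step 402: maximum-likelihood estimates P(Cj) = |Cj| / N and
# P(xk|Cj) = |xjk| / |Cj| from simple counts (no smoothing).
from collections import Counter

samples = [  # (attribute value, class)
    ("high", "vip"), ("high", "vip"), ("low", "vip"),
    ("low", "normal"), ("low", "normal"),
]
N = len(samples)
class_counts = Counter(c for _, c in samples)
prior = {c: n / N for c, n in class_counts.items()}  # P(Cj) = |Cj| / N

def cond_prob(value, cls):
    # P(xk | Cj) = |xjk| / |Cj|
    n_match = sum(1 for v, c in samples if c == cls and v == value)
    return n_match / class_counts[cls]
```

Here P("vip") = 3/5 and P("high" | "vip") = 2/3, matching the formulas directly.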
Step 403: for the training set D, the weight of each attribute is calculated according to equation (6) above.
Step 404: based on the parameters in steps 402 and 403, a weighted classifier (i.e., a weighted naive bayes classification model) is constructed.
Step 405: for the test set D2, calculate the probability of each test datum according to formula (7) above, and assign each test datum X to the class with the largest probability Pi, obtaining the classification of each datum in the test set.
The classification result of the model is compared with the true classes of the test data, and the classification accuracy is calculated as

accuracy = TP / (TP + FP)

where TP is the number of samples correctly classified as positive examples, i.e. samples that are actually positive and are classified as positive by the classifier, and FP is the number of samples wrongly classified as positive examples, i.e. samples that are actually negative but are classified as positive by the classifier. When the classification accuracy is above a predetermined value (e.g. 85%), the weighted naive Bayes classification model is considered trained.
And then, based on the trained weighted naive Bayes classification model, the incremental naive Bayes model can be trained through incremental data.
Specifically, firstly, inputting the attribute information of each incremental data into the trained weighted naive Bayes model to obtain the class information output by the model; and then, generating error sample set data according to the class information output by the model, wherein the class information of the incremental data in the error sample set data is different from the class information output by the model, namely, if the class information output by the model is different from the class information of the incremental data, the model can not accurately predict the classification of the incremental data, and the incremental data is added into the error sample set data. The trained weighted naive Bayes model is then trained based on the error sample set data and the test set data to obtain the incremental naive Bayes model.
In one embodiment, the weight of each attribute of each data in the training set data can be updated according to preset weights of training set data and error sample set data, attribute information and category information of each incremental data in the error sample set data; and then training the weighted naive Bayes model according to the updated weight of each attribute, the attribute information and the category information of each incremental data in the error sample set data and the test set data.
One embodiment of an incremental naive bayes model training process is given below.
In the embodiment of the present invention, the idea of incremental learning is mainly as follows: the constructed weighted naive Bayes model classifies the candidate sample set, and for each candidate sample the classification result falls into one of two cases. In the first case, the sample is classified correctly, and no processing is performed on it. In the second case, the sample classification result is erroneous, and the incremental data are needed to correct the model parameters to accommodate the changes brought by the new samples. That is, samples that help improve the accuracy of the current classifier (i.e., the weighted naive Bayes model) are selected and added to the training set to modify the current classification parameters, yielding a new model (corresponding to the incremental naive Bayes model).
Assume the training set is D, the candidate sample set is D', and sample X' = {x1, x2, ..., xn, C'} is an incremental datum in the candidate sample set. When X' is added to the training set D, the class prior probability and the class conditional probability need to be recalculated. The class prior probability is corrected according to the following formula (8):

P(Cj) = (|Cj| + |C'j|) / λ   (8)

where λ is the sum of the number of samples in the incremental set and the number of samples in the training set, |C'j| is the number of incremental samples belonging to class Cj, and j = 1, 2, ..., m indexes the classes.
The class conditional probability is corrected according to formula (9):

P(xk|Cj) = (|xjk| + |x'jk| + 1) / ξ   (9)

where ξ is the sum of the number of values of Ai and the number of class-Cj samples in the training set, |x'jk| is the count of the kth feature item of class Cj in the incremental data, and Ai is an attribute variable of the sample data.
Due to the addition of the new sample X', new sample data is added into the training set, and both the class prior probability and the class conditional probability are changed. As can be seen from the above formulas (8) and (9), the addition of X' requires only partial modification.
The weight values are corrected as follows: for the newly added candidate sample set, the weight Wk-increased of each attribute (k = 1, 2, ..., n) is calculated using the information-gain method, and the obtained weights are then fused with the attribute weights obtained from the original training set to give the new weights:

Wk-new = α·Wk + β·Wk-increased   (10)

where α + β = 1, and α and β are two parameters that set the relative influence of the attribute weights from the training set and from the candidate sample set. In actual operation, the values of α and β are tuned through model training; based on a comparison of the classification accuracy of the weighted Bayes algorithm, the final values are α = 0.70 and β = 0.30.
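Formula (10) is a simple convex blend and can be written directly in code; the weight vectors below are illustrative values, while α = 0.70 and β = 0.30 are the values stated above:

```python
# Formula (10) as code: fuse the old attribute weights with the weights
# computed on the newly added candidate samples.
import numpy as np

alpha, beta = 0.70, 0.30          # alpha + beta = 1
w_old = np.array([0.5, 0.3, 0.2])        # weights from the training set
w_increment = np.array([0.2, 0.4, 0.4])  # weights from the increment
w_new = alpha * w_old + beta * w_increment  # -> [0.41, 0.33, 0.26]
```

Because α + β = 1, weights that were normalised to sum to 1 stay normalised after the update.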
That is, after training on the training set each attribute has a corresponding weight, and the newly added candidate sample set corrects the original attribute weights so as to adapt to the changes brought by new samples. In the implementation, for the weighted naive Bayes classifier C (i.e. the trained weighted naive Bayes model), the candidate incremental sample set D1 and the misclassified sample set S (corresponding to the error sample set above), S = ∅ before the candidate incremental sample set is classified; the misclassified samples are collected so that they can subsequently serve as incremental data to correct the model parameters of classifier C, ensuring that the model performs better on the misclassified samples.
The process of updating the model parameters of the classifier C according to the incremental data is shown in fig. 5, and the flow includes:
step 501: each sample in the candidate sample set D1 is classified by the classifier C, and the misclassified sample is saved in the misclassified sample set S.
Step 502: and if the classification error sample set S is not equal to phi, updating the class prior probability and the class conditional probability, and respectively recalculating the class prior probability and the class conditional probability according to the formulas (8) and (9).
Step 503: for the classification error sample set S, the weight of each attribute is calculated by using the formula (6), and the corrected weight is obtained by using the formula (10).
Step 504: updating each parameter of the class C, namely updating the parameters of the steps 402 and 403 to obtain a new classifier C-new;
Step 505: for the test set D2, calculate the probability of each test datum according to formula (7), assign each test datum X to the class with the largest probability Pi, obtain the classification of the test data, compare the classification results with the true classes of the test data, and calculate the classification accuracy

accuracy = TP / (TP + FP)

When the classification accuracy is above a predetermined value (e.g. 85%), the incremental naive Bayes model is considered trained.
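The error-set-driven flow of steps 501 to 505 can be sketched end to end. As an assumption for illustration, scikit-learn's MultinomialNB (which supports partial_fit) stands in for the patent's weighted classifier, and the synthetic data, in which the class depends only on attribute 0, is likewise invented for the example:

```python
# Sketch of steps 501-505: classify candidates, keep only the
# misclassified ones as the error set S, update the model with S,
# then evaluate on the held-out test set.
import numpy as np
from sklearn.naive_bayes import MultinomialNB

rng = np.random.default_rng(1)
X_train = rng.integers(0, 3, size=(200, 4))
y_train = (X_train[:, 0] > 0).astype(int)   # class determined by attr 0
X_cand = rng.integers(0, 3, size=(100, 4))  # candidate increment set D1
y_cand = (X_cand[:, 0] > 0).astype(int)
X_test = rng.integers(0, 3, size=(50, 4))   # test set D2
y_test = (X_test[:, 0] > 0).astype(int)

clf = MultinomialNB()
clf.partial_fit(X_train, y_train, classes=[0, 1])

# Step 501: classify the candidates and collect only the errors (set S).
errors = clf.predict(X_cand) != y_cand
# Steps 502-504: update the model parameters with the error set alone,
# instead of retraining on the full original data.
if errors.any():
    clf.partial_fit(X_cand[errors], y_cand[errors])
# Step 505: evaluate the updated classifier on the test set.
acc = (clf.predict(X_test) == y_test).mean()
```

The key property mirrored here is that the update touches only the misclassified increment, which is what saves the retraining cost described in the embodiment.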
The embodiment of the invention can save training resources and improve training efficiency through incremental iterative training; by adding valuable samples to the training data, expanding the sample information, and updating the parameters of the model, the accuracy of the user portrait can be further improved to cope with new changes. The embodiment of the invention solves the problem that a traditional Bayesian model must relearn all samples whenever it is to learn the information contained in new samples, which consumes a great deal of time and resources.
Based on similar inventive concepts, the embodiment of the present invention further provides a user portrait apparatus based on an incremental naive bayes model, and preferably, the apparatus is used for implementing the processes in the above method embodiments.
FIG. 6 is a block diagram of a user portrait apparatus based on an incremental naive Bayes model according to an embodiment of the invention. As shown in FIG. 6, the apparatus comprises: a data preprocessing unit 61, a category information generation unit 62 and a user portrait construction unit 63, wherein:
the data preprocessing unit 61 is configured to acquire user data and preprocess the user data to generate attribute information of the user data.
Specifically, the data preprocessing unit is used for performing a data cleaning operation, a data integration operation and a data reduction operation on the user data.
A category information generating unit 62, configured to input attribute information of the user data into a trained incremental naive bayes model to generate category information of the user, where the incremental naive bayes model is obtained by incremental learning through incremental data based on a trained weighted naive bayes model.
A user representation construction unit 63 for constructing a user representation of the user according to the category information of the user.
The data preprocessing unit 61 is used for preprocessing the user data to obtain attribute information of the user data, then the category information generating unit 62 is used for inputting the attribute information into the trained incremental naive Bayes model to obtain corresponding category information, and the user portrait constructing unit 63 is used for constructing a user portrait according to the category information.
In practical operation, as shown in fig. 7, the above apparatus further comprises: a weighted naive bayes model training unit 64 for training the weighted naive bayes model.
Specifically, the weighted naive bayes model training unit comprises: historical data acquisition module, historical data divide module and weighted naive Bayesian model training module, wherein:
a historical data obtaining module, configured to obtain preprocessed historical user data, where the historical user data includes: attribute information and category information;
a historical data dividing module, configured to divide the historical user data into training set data, test set data, and candidate increment set data according to a predetermined rule, where the candidate increment set data includes: a plurality of incremental data;
and the weighted naive Bayes model training module is used for training the weighted naive Bayes model according to the training set data and the test set data.
The weighted naive Bayes model training module specifically comprises: an initial model construction sub-module and a weighted naive Bayes model training sub-module, wherein:
the initial model building submodule is used for building an initial weighted naive Bayes model according to the attribute information and the category information in the training set data;
and the weighted naive Bayes model training sub-module is used for updating the initial weighted naive Bayes model according to the test set data so as to obtain a trained weighted naive Bayes model.
In actual operation, the attribute information includes: a plurality of attributes. The initial model building submodule is specifically configured to: determining the weight of each attribute in the data attribute information in the training set data based on information gain; and constructing an initial weighted naive Bayes model according to the attribute information, the category information and the weight of each attribute in the training set data.
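The information-gain-based weighting described above can be illustrated as follows. Equation (6) is not reproduced in this text, so plain information gain (class entropy minus conditional entropy), normalized so the weights average to 1, is used as a hypothetical stand-in:

```python
import math
from collections import Counter, defaultdict

def entropy(labels):
    # Shannon entropy of a list of class labels, in bits.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(samples, attr):
    # samples: list of (attribute dict, class label) pairs.
    labels = [y for _, y in samples]
    base = entropy(labels)
    by_value = defaultdict(list)
    for attrs, y in samples:
        by_value[attrs[attr]].append(y)
    conditional = sum(len(ys) / len(samples) * entropy(ys)
                      for ys in by_value.values())
    return base - conditional

def attribute_weights(samples):
    # Weight each attribute by its information gain, normalized to mean 1.
    gains = {a: information_gain(samples, a) for a in samples[0][0]}
    mean = sum(gains.values()) / len(gains) or 1.0
    return {a: g / mean for a, g in gains.items()}
```

An attribute that perfectly separates the classes receives a weight above 1, while an uninformative attribute receives a weight near 0, which is the intended effect of weighting the naive Bayes likelihood terms.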
In one embodiment, the above apparatus further comprises: an incremental naive bayes model training unit 65 for training said incremental naive bayes model.
The incremental naive Bayes model training unit comprises: the device comprises a category information output module, an error sample set generation module and an incremental naive Bayes model training module, wherein:
the category information output module is used for inputting the attribute information of each incremental data into the trained weighted naive Bayesian model to obtain category information output by the model;
the error sample set generating module is used for generating error sample set data according to the class information output by the model, wherein the class information of the incremental data in the error sample set data is different from the class information output by the model;
an incremental naive Bayes model training module for training the trained weighted naive Bayes model according to the error sample set data and the test set data to obtain the incremental naive Bayes model.
The incremental naive Bayes model training module specifically comprises: a weight update sub-module and an incremental naive Bayes model training sub-module, wherein:
the weight updating submodule is used for updating the weight of each attribute of each data in the training set data according to the preset weights of the training set data and the error sample set data, the attribute information and the category information of each incremental data in the error sample set data;
and the incremental naive Bayes model training sub-module is used for training the weighted naive Bayes model according to the updated weight of each attribute, the attribute information and the category information of each incremental data in the error sample set data and the test set data.
For specific execution processes of the units, the modules, and the sub-modules, reference may be made to the description in the foregoing method embodiments, and details are not described here again.
In practical operation, the units, the modules and the sub-modules may be combined or may be arranged singly, and the present invention is not limited thereto.
Fig. 8 is a block diagram of the overall architecture of an incremental naive bayes model (shown as a classifier) according to an embodiment of the invention, as shown in fig. 8, the architecture comprising: the device comprises a data acquisition module 1, a data preprocessing module 2, a Bayesian modeling module 3 and an incremental learning module 4.
Based on this framework, the data set used for the user portrait is first obtained from the data lake. The data preprocessing module 2 processes the acquired data set, and the Bayesian modeling module 3 establishes the weighted Bayesian classification model. The incremental learning module 4 is the core of the embodiment of the invention: by selecting valuable data, it modifies the parameters of the trained classifier, which can improve the classification efficiency of the classifier and effectively improve the accuracy of the user portrait.
As shown in fig. 9, the data preprocessing module 2 includes: a data cleaning unit 901 for performing a cleaning operation on data; a data integration unit 902, configured to perform an integration operation on data; and the data reduction unit 903 is used for performing reduction operation on data.
Through data preprocessing operation, redundant data are removed, the dimensionality and the data volume of the data are reduced, the scale of the training set subset or the incremental data set subset is far smaller than that of an actual training set and an incremental data set, and the speed of classifier construction and incremental learning is increased.
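The three preprocessing operations can be sketched on dict-shaped records as follows; the field names (`user_id`, etc.) and the exact cleaning rules are invented for illustration and are not taken from the specification:

```python
def clean(records, required=("user_id",)):
    # Data cleaning: drop records with missing required fields and exact duplicates.
    seen, out = set(), []
    for r in records:
        if any(r.get(k) is None for k in required):
            continue
        key = tuple(sorted(r.items()))
        if key not in seen:
            seen.add(key)
            out.append(r)
    return out

def integrate(*sources):
    # Data integration: merge records from several sources keyed by user_id.
    merged = {}
    for src in sources:
        for r in src:
            merged.setdefault(r["user_id"], {}).update(r)
    return list(merged.values())

def reduce_attrs(records, keep):
    # Data reduction: keep only the attributes relevant to the portrait model.
    return [{k: r[k] for k in keep if k in r} for r in records]
```

Chaining `clean`, `integrate`, and `reduce_attrs` yields the smaller, deduplicated training subset that the passage above describes, which speeds up both classifier construction and incremental learning.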
The present embodiment also provides an electronic device, which may be a desktop computer, a tablet computer, a mobile terminal, or the like, but is not limited thereto. In this embodiment, the electronic device may be implemented by referring to the above method embodiment and the embodiment of the user portrait apparatus based on the incremental naive Bayes model, the contents of which are incorporated herein; repeated details are not described again.
Fig. 10 is a schematic block diagram of a system configuration of an electronic apparatus 600 according to an embodiment of the present invention. As shown in fig. 10, the electronic device 600 may include a central processor 100 and a memory 140; the memory 140 is coupled to the central processor 100. Notably, this diagram is exemplary; other types of structures may also be used in addition to or in place of the structure to implement telecommunications or other functions.
In one embodiment, the incremental naive bayes model based user portrait functionality can be integrated into the central processor 100. The central processor 100 may be configured to control as follows:
acquiring user data, and preprocessing the user data to generate attribute information of the user data;
inputting the attribute information of the user data into a trained incremental naive Bayes model to generate category information of the user, wherein the incremental naive Bayes model is obtained by incremental learning through incremental data based on the trained weighted naive Bayes model;
and constructing the user portrait of the user according to the category information of the user.
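The three control steps above can be sketched as a single pipeline function; `preprocess`, `model`, and `label_names` are hypothetical placeholders standing in for the components described earlier, not names from the specification:

```python
def build_user_portrait(raw_user_data, preprocess, model, label_names):
    attrs = preprocess(raw_user_data)     # step 1: generate attribute information
    category = model.predict(attrs)       # step 2: incremental naive Bayes category
    return {                              # step 3: construct the user portrait
        "attributes": attrs,
        "category": category,
        "tags": label_names.get(category, []),
    }
```

Any object exposing a `predict(attrs)` method (such as the trained incremental naive Bayes model) can be plugged in as `model`.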
As can be seen from the above description, in the electronic device provided in the embodiment of the present application, the attribute information of the user data is obtained by preprocessing the user data, and then the attribute information is input into the trained incremental naive bayes model to obtain the corresponding category information, so that the user portrait can be constructed according to the category information.
In another embodiment, the incremental naive bayes model based user-representation device may be configured separately from the central processor 100, for example, the incremental naive bayes model based user-representation device may be configured as a chip connected to the central processor 100, and the incremental naive bayes model based user-representation device may be implemented by control of the central processor.
As shown in fig. 10, the electronic device 600 may further include: communication module 110, input unit 120, audio processing unit 130, display 160, power supply 170. It is noted that the electronic device 600 does not necessarily include all of the components shown in FIG. 10; furthermore, the electronic device 600 may also comprise components not shown in fig. 10, which may be referred to in the prior art.
As shown in fig. 10, the central processor 100, sometimes referred to as a controller or operational control, may include a microprocessor or other processor device and/or logic device, the central processor 100 receiving input and controlling the operation of the various components of the electronic device 600.
The memory 140 may be, for example, one or more of a buffer, a flash memory, a hard drive, a removable medium, a volatile memory, a non-volatile memory, or other suitable device. It may store relevant information as well as the programs for processing that information, and the central processing unit 100 may execute the programs stored in the memory 140 to implement information storage, processing, and the like.
The input unit 120 provides input to the central processor 100. The input unit 120 is, for example, a key or a touch input device. The power supply 170 is used to supply power to the electronic device 600. The display 160 is used to display objects to be displayed, such as images and characters. The display may be, for example, an LCD display, but is not limited thereto.
The memory 140 may be a solid-state memory such as a read-only memory (ROM), a random access memory (RAM), a SIM card, or the like. It may also be a memory that retains information even when powered off, and that can be selectively erased and provided with new data; such a memory is sometimes referred to as an EPROM or the like. The memory 140 may also be some other type of device. The memory 140 includes a buffer memory 141 (sometimes referred to as a buffer). The memory 140 may include an application/function storage section 142, and the application/function storage section 142 is used to store application programs and function programs or a flow for executing the operation of the electronic device 600 by the central processing unit 100.
The memory 140 may also include a data store 143, the data store 143 for storing data, such as contacts, digital data, pictures, sounds, and/or any other data used by the electronic device. The driver storage portion 144 of the memory 140 may include various drivers of the electronic device for communication functions and/or for performing other functions of the electronic device (e.g., messaging application, address book application, etc.).
The communication module 110 is a transmitter/receiver 110 that transmits and receives signals via an antenna 111. The communication module (transmitter/receiver) 110 is coupled to the central processor 100 to provide an input signal and receive an output signal, which may be the same as in the case of a conventional mobile communication terminal.
Based on different communication technologies, a plurality of communication modules 110, such as a cellular network module, a bluetooth module, and/or a wireless local area network module, may be provided in the same electronic device. The communication module (transmitter/receiver) 110 is also coupled to a speaker 131 and a microphone 132 via an audio processor 130 to provide audio output via the speaker 131 and receive audio input from the microphone 132 to implement general telecommunications functions. Audio processor 130 may include any suitable buffers, decoders, amplifiers and so forth. In addition, an audio processor 130 is also coupled to the central processor 100, so that recording on the local can be enabled through a microphone 132, and so that sound stored on the local can be played through a speaker 131.
Embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the steps of the incremental naive bayes model-based user portrait method.
In summary, the embodiment of the present invention provides an effective incremental naive Bayes user portrait scheme in which the weighted naive Bayes model is updated by incremental learning; compared with the conventional approach, in which all samples must be relearned whenever the model is updated to adapt to new sample data, the incremental learning method of the embodiment of the present invention avoids this cost. Faced with a rapidly developing user portrait system, explosive data growth, and continuously changing business requirements, the incremental naive Bayes model reduces the computing resources needed for the user portrait classification problem, and updates the model parameters more quickly by learning new samples, so that the model performs better on the user portrait classification problem.
The preferred embodiments of the present invention have been described above with reference to the accompanying drawings. The many features and advantages of the embodiments are apparent from the detailed specification, and thus, it is intended by the appended claims to cover all such features and advantages of the embodiments which fall within the true spirit and scope thereof. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the embodiments of the invention to the exact construction and operation illustrated and described, and accordingly, all suitable modifications and equivalents may be resorted to, falling within the scope thereof.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The principle and the implementation mode of the invention are explained by applying specific embodiments in the invention, and the description of the embodiments is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (16)

1. A user portrayal method based on an incremental naive Bayes model, the method comprising:
acquiring user data, and preprocessing the user data to generate attribute information of the user data;
inputting the attribute information of the user data into a trained incremental naive Bayes model to generate category information of the user, wherein the incremental naive Bayes model is obtained by incremental learning through incremental data based on the trained weighted naive Bayes model;
and constructing the user portrait of the user according to the category information of the user.
2. The method of claim 1, wherein preprocessing the user data comprises:
and performing data cleaning operation, data integration operation and data specification operation on the user data.
3. The method of claim 1, wherein the weighted naive bayes model is trained by:
acquiring preprocessed historical user data, wherein the historical user data comprises: attribute information and category information;
dividing the historical user data into training set data, test set data and candidate increment set data according to a preset rule, wherein the candidate increment set data comprises: a plurality of incremental data;
and training the weighted naive Bayes model according to the training set data and the test set data.
4. The method of claim 3, wherein training the weighted naive Bayes model based on the training set data and the test set data comprises:
constructing an initial weighted naive Bayes model according to attribute information and category information in the training set data;
and updating the initial weighted naive Bayes model according to the test set data to obtain a trained weighted naive Bayes model.
5. The method of claim 4, wherein the attribute information comprises: a plurality of attributes, and constructing the initial weighted naive Bayes model according to the attribute information and the category information in the training set data comprises:
determining the weight of each attribute in the data attribute information in the training set data based on information gain;
and constructing an initial weighted naive Bayes model according to the attribute information, the category information and the weight of each attribute in the training set data.
6. The method of claim 5, wherein the incremental naive Bayes model is trained by:
inputting the attribute information of each incremental data into the trained weighted naive Bayes model to obtain class information output by the model;
generating error sample set data according to the class information output by the model, wherein the class information of incremental data in the error sample set data is different from the class information output by the model;
training the trained weighted naive Bayes model according to the error sample set data and the test set data to obtain the incremental naive Bayes model.
7. The method of claim 6, wherein training the trained weighted naive Bayes model based on the erroneous sample set data and the test set data comprises:
updating the weight of each attribute of each data in the training set data according to preset weights of the training set data and error sample set data, attribute information and category information of each incremental data in the error sample set data;
and continuously training the trained weighted naive Bayes model according to the updated weight of each attribute, the attribute information and the category information of each incremental data in the error sample set data and the test set data.
8. An incremental naive bayes model-based user portrayal apparatus, the apparatus comprising:
the data preprocessing unit is used for acquiring user data and preprocessing the user data to generate attribute information of the user data;
the category information generation unit is used for inputting the attribute information of the user data into a trained incremental naive Bayesian model to generate the category information of the user, wherein the incremental naive Bayesian model is obtained by incremental learning through incremental data based on the trained weighted naive Bayesian model;
and the user portrait constructing unit is used for constructing the user portrait of the user according to the category information of the user.
9. The apparatus according to claim 8, wherein the data preprocessing unit is specifically configured to:
and performing data cleaning operation, data integration operation and data specification operation on the user data.
10. The apparatus of claim 8, further comprising: a weighted naive Bayes model training unit for training the weighted naive Bayes model,
the weighted naive Bayes model training unit comprises:
a historical data obtaining module, configured to obtain preprocessed historical user data, where the historical user data includes: attribute information and category information;
a historical data dividing module, configured to divide the historical user data into training set data, test set data, and candidate increment set data according to a predetermined rule, where the candidate increment set data includes: a plurality of incremental data;
and the weighted naive Bayes model training module is used for training the weighted naive Bayes model according to the training set data and the test set data.
11. The apparatus of claim 10, wherein the weighted naive bayes model training module comprises:
the initial model building submodule is used for building an initial weighted naive Bayes model according to the attribute information and the category information in the training set data;
and the weighted naive Bayes model training sub-module is used for updating the initial weighted naive Bayes model according to the test set data so as to obtain a trained weighted naive Bayes model.
12. The apparatus of claim 11, wherein the attribute information comprises: a plurality of attributes, and the initial model building submodule is specifically configured to:
determining the weight of each attribute in the data attribute information in the training set data based on information gain;
and constructing an initial weighted naive Bayes model according to the attribute information, the category information and the weight of each attribute in the training set data.
13. The apparatus of claim 12, further comprising: an incremental naive Bayes model training unit for training the incremental naive Bayes model,
the incremental naive Bayes model training unit comprises:
the category information output module is used for inputting the attribute information of each incremental data into the trained weighted naive Bayesian model to obtain category information output by the model;
the error sample set generating module is used for generating error sample set data according to the class information output by the model, wherein the class information of the incremental data in the error sample set data is different from the class information output by the model;
an incremental naive Bayes model training module for training the trained weighted naive Bayes model according to the error sample set data and the test set data to obtain the incremental naive Bayes model.
14. The apparatus of claim 13, wherein the incremental naive bayes model training module comprises:
the weight updating submodule is used for updating the weight of each attribute of each data in the training set data according to the preset weights of the training set data and the error sample set data, the attribute information and the category information of each incremental data in the error sample set data;
and the incremental naive Bayes model training sub-module is used for training the weighted naive Bayes model according to the updated weight of each attribute, the attribute information and the category information of each incremental data in the error sample set data and the test set data.
15. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 7 are implemented when the processor executes the program.
16. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202010608841.6A 2020-06-30 2020-06-30 User portrait method and device based on increment naive Bayes model Active CN111783873B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010608841.6A CN111783873B (en) 2020-06-30 2020-06-30 User portrait method and device based on increment naive Bayes model


Publications (2)

Publication Number Publication Date
CN111783873A true CN111783873A (en) 2020-10-16
CN111783873B CN111783873B (en) 2023-08-25

Family

ID=72761087

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010608841.6A Active CN111783873B (en) 2020-06-30 2020-06-30 User portrait method and device based on increment naive Bayes model

Country Status (1)

Country Link
CN (1) CN111783873B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108062331A (en) * 2016-11-08 2018-05-22 南京理工大学 Increment type naive Bayesian file classification method based on Lifelong Learning
CN108769198A (en) * 2018-05-29 2018-11-06 百度在线网络技术(北京)有限公司 Method and apparatus for pushed information
CN109145308A (en) * 2018-09-28 2019-01-04 乐山师范学院 A kind of concerning security matters text recognition method based on improvement naive Bayesian
CN109389138A (en) * 2017-08-09 2019-02-26 武汉安天信息技术有限责任公司 A kind of user's portrait method and device
CN109740620A (en) * 2018-11-12 2019-05-10 平安科技(深圳)有限公司 Method for building up, device, equipment and the storage medium of crowd portrayal disaggregated model


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734470A (en) * 2021-01-05 2021-04-30 中国工商银行股份有限公司 Electronic coupon pushing method and device based on customer preference
CN112966071A (en) * 2021-02-03 2021-06-15 北京奥鹏远程教育中心有限公司 User feedback information analysis method, device, equipment and readable storage medium
CN112966071B (en) * 2021-02-03 2023-09-08 北京奥鹏远程教育中心有限公司 User feedback information analysis method, device, equipment and readable storage medium
CN112598090A (en) * 2021-03-08 2021-04-02 北京冠新医卫软件科技有限公司 Method, device, equipment and system for health portrait
CN112598090B (en) * 2021-03-08 2021-05-18 北京冠新医卫软件科技有限公司 Method, device, equipment and system for health portrait
CN113827981A (en) * 2021-08-17 2021-12-24 杭州电魂网络科技股份有限公司 Game loss user prediction method and system based on naive Bayes
CN113742543A (en) * 2021-09-22 2021-12-03 中国银行股份有限公司 Data screening method and device, electronic equipment and storage medium
CN113742543B (en) * 2021-09-22 2024-02-23 中国银行股份有限公司 Data screening method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111783873B (en) 2023-08-25

Similar Documents

Publication Publication Date Title
EP3467723B1 (en) Machine learning based network model construction method and apparatus
CN111859960B (en) Semantic matching method, device, computer equipment and medium based on knowledge distillation
CN111783873B (en) User portrait method and device based on incremental naive Bayes model
US20230102337A1 (en) Method and apparatus for training recommendation model, computer device, and storage medium
US11381651B2 (en) Interpretable user modeling from unstructured user data
US11373117B1 (en) Artificial intelligence service for scalable classification using features of unlabeled data and class descriptors
US20220172260A1 (en) Method, apparatus, storage medium, and device for generating user profile
CN112070310A (en) Churn-user prediction method and device based on artificial intelligence, and electronic equipment
CN111950295A (en) Method and system for training natural language processing model
CN112785005A (en) Multi-objective task auxiliary decision-making method and device, computer equipment and medium
CN111898675B (en) Credit wind control model generation method and device, scoring card generation method, machine readable medium and equipment
CN111291618A (en) Labeling method, device, server and storage medium
CN111611390A (en) Data processing method and device
Buskirk et al. Why machines matter for survey and social science researchers: Exploring applications of machine learning methods for design, data collection, and analysis
US20200074277A1 (en) Fuzzy input for autoencoders
CN113569955A (en) Model training method, user portrait generation method, device and equipment
CN115129902B (en) Media data processing method, device, equipment and storage medium
CN116756281A (en) Knowledge question-answering method, device, equipment and medium
US11804214B2 (en) Methods and apparatuses for discriminative pre-training for low resource title compression
US20220309292A1 (en) Growing labels from semi-supervised learning
CN114897607A (en) Data processing method and device for product resources, electronic equipment and storage medium
CN113570044A (en) Customer churn analysis model training method and device
CN113763928A (en) Audio category prediction method and device, storage medium and electronic equipment
CN111768306A (en) Risk identification method and system based on intelligent data analysis
JP2020071737A (en) Learning method, learning program and learning device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant