CN109359793B - Prediction model training method and device for new scene - Google Patents

Prediction model training method and device for new scene

Info

Publication number
CN109359793B
CN109359793B
Authority
CN
China
Prior art keywords
model
migrated
training sample
sample set
new scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810875574.1A
Other languages
Chinese (zh)
Other versions
CN109359793A (en
Inventor
张天翼
陈明星
郭龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Advantageous New Technologies Co Ltd
Original Assignee
Advanced New Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced New Technologies Co Ltd filed Critical Advanced New Technologies Co Ltd
Priority to CN201810875574.1A priority Critical patent/CN109359793B/en
Publication of CN109359793A publication Critical patent/CN109359793A/en
Priority to TW108119499A priority patent/TWI818999B/en
Priority to PCT/CN2019/091658 priority patent/WO2020024716A1/en
Application granted granted Critical
Publication of CN109359793B publication Critical patent/CN109359793B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0637Strategic management or analysis, e.g. setting a goal or target of an organisation; Planning actions based on goals; Analysis or evaluation of effectiveness of goals
    • G06Q10/06375Prediction of business process outcome or impact based on a proposed change

Landscapes

  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Educational Administration (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Development Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Feedback Control In General (AREA)

Abstract

A prediction model training method and device for a new scene are disclosed. A prediction model training method for a new scene comprises: obtaining a set of models to be migrated; selecting at least one model from the set of models to be migrated for predictive labeling of unlabeled samples in the new scene; obtaining an initial training sample set in the new scene; adding prediction labels to the unlabeled samples in the initial training sample set by using the selected model; and updating the models to be migrated with the prediction-labeled initial training sample set based on a supervised learning algorithm, to obtain models applicable to the new scene.

Description

Prediction model training method and device for new scene
Technical Field
The embodiments of this specification relate to the technical field of Internet applications, and in particular to a prediction model training method and device for a new scene.
Background
In the big data era, decision-making functions can be realized by training machine learning models on accumulated sample data. For example, in a financial risk control scenario, a large amount of transaction data can be used as sample data to train a risk control model (a "wind control" model) through machine learning, so that the trained model can automatically make risk decisions on new transactions.
However, in some scenarios it often takes a long time to accumulate sample data and train a model before a machine learning model can be deployed; for example, data accumulation and training of a wind control model generally take more than half a year. One solution is to deploy, in the new scene, a historical model trained on historical data from other scenes, but because sample data differ between scenes, such a historical model generally performs poorly when applied to the new scene.
A more efficient and accurate prediction model training scheme for new scenes is therefore needed.
Disclosure of Invention
In view of the above technical problems, embodiments of this specification provide a prediction model training method and apparatus for a new scene. The technical solutions are as follows:
a method of predictive model training for a new scene, the method comprising:
obtaining a set of models to be migrated, wherein a model to be migrated is: a model that is deployed and used in an old scene and can be migrated to the new scene;
selecting at least one model from the set of models to be migrated for predictive labeling of unlabeled samples in a new scene;
obtaining an initial training sample set in a new scene, wherein the initial training sample set comprises unlabeled samples;
adding prediction labels to the unlabeled samples in the initial training sample set by using the selected model;
and updating the models to be migrated with the prediction-labeled initial training sample set based on a supervised learning algorithm, to obtain models applicable to the new scene.
An apparatus for predictive model training for a new scene, the apparatus comprising:
a to-be-migrated model obtaining module, configured to obtain a set of models to be migrated, where a model to be migrated is: a model that is deployed and used in an old scene and can be migrated to the new scene;
a labeling model selecting module, configured to select at least one model from the set of models to be migrated, so as to predictively label the unlabeled samples in the new scene;
a sample set obtaining module, configured to obtain an initial training sample set in the new scene, where the initial training sample set comprises unlabeled samples;
a sample labeling module, configured to add prediction labels to the unlabeled samples in the initial training sample set by using the selected model;
and a model updating module, configured to update the models to be migrated with the prediction-labeled initial training sample set based on a supervised learning algorithm, to obtain models applicable to the new scene.
According to the technical solutions provided by the embodiments of this specification, models deployed and used in an old scene can be migrated to a new scene. Because the sample accumulation time in the new scene is short, label prediction is performed with the models to be migrated when samples have no, or only a few, actual labels, so that the models to be migrated can be further optimized and made better suited to the new scene. This provides a more efficient and more accurate prediction model training scheme for the new scene.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of embodiments of the invention.
In addition, no single embodiment of this specification is required to achieve all of the effects described above.
Drawings
In order to more clearly illustrate the embodiments of this specification or the technical solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in this specification, and those skilled in the art can derive other drawings from them.
FIG. 1 is a schematic flow chart diagram of a predictive model training method for a new scene in an embodiment of the present description;
FIG. 2 is a schematic flow chart diagram of a method for training a wind control model for a new scene according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a prediction model training apparatus for a new scene according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an apparatus for configuring a device according to an embodiment of the present disclosure.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the embodiments of this specification, the technical solutions are described in detail below with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of this specification. All other embodiments that can be derived by those of ordinary skill in the art from the embodiments given herein fall within the scope of protection.
An embodiment of the present specification provides a prediction model training method for a new scenario, and referring to fig. 1, the method may include the following steps:
s101, acquiring a set of models to be migrated;
Due to differences between the new scene and the old scene, some models deployed and used in the old scene may not be suitable for the new scene, while others are suitable and can be migrated. A model to be migrated is therefore a model that is deployed and used in the old scene and can be migrated to the new scene.
This specification does not limit the specific manner in which the set of models to be migrated is obtained.
In this specification, the feature vectors input to each model in the old scene may be compared with the feature vectors that can be extracted from training samples in the new scene, so as to determine whether each model in the old scene can be migrated to the new scene. Specifically, a first feature set is obtained, which includes: a plurality of feature vectors that can be extracted from predetermined new-scene training samples. Then, for any model deployed in the old scene: a second feature set is obtained, which includes: a plurality of feature vectors input by that model; and the model is determined to be a model to be migrated if it conforms to a preset migration rule. The preset migration rule includes: the feature vectors in the intersection of the first feature set and the second feature set satisfy a preset migration condition.
The preset migration condition may be in various forms, and the first feature set and the second feature set may be compared from various angles.
For example, the preset migration condition may compare the number of feature vectors in the intersection of the first feature set and the second feature set against a preset threshold to determine whether the model can be migrated to the new scene. If the number of feature vectors in the intersection is small, the model is likely to perform poorly in the new scene and therefore is not migrated; otherwise, the model is considered migratable to the new scene.
For another example, some feature vectors in the new scene are particularly important for model training, and whether they are covered can be given extra weight when deciding whether an old-scene model is suitable for migration. The preset migration condition may therefore be: the weighted score calculated from the preset weights of the feature vectors in the intersection is not less than a preset threshold. Feature vectors that are more important for model training can be assigned higher preset weights, so if the intersection contains more of the important feature vectors, the computed weighted score is higher and the model can be considered migratable to the new scene.
The preset migration condition may also take other forms, and the conditions may be used alone or in combination; those skilled in the art can set them flexibly according to actual needs, and this specification does not specifically limit this.
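Purely as an illustration of the two example conditions above (intersection size and weighted score), the following Python sketch shows one way they might be combined; the function names, thresholds, weight table, and toy feature lists are assumptions and are not part of the claimed method.

```python
# Hypothetical sketch of the preset migration rule: an old-scene model is kept
# as a "model to be migrated" when the feature vectors it shares with the new
# scene are numerous enough and/or important enough.

def weighted_score(shared_features, feature_weights):
    """Sum of the preset weights of the feature vectors in the intersection."""
    return sum(feature_weights.get(f, 0.0) for f in shared_features)

def is_migratable(model_features, new_scene_features, feature_weights,
                  min_count=3, min_score=0.6):
    """model_features: second feature set (inputs of one old-scene model);
    new_scene_features: first feature set (extractable from new-scene samples)."""
    shared = set(model_features) & set(new_scene_features)
    enough = len(shared) >= min_count                                  # intersection-size condition
    important = weighted_score(shared, feature_weights) >= min_score   # weighted-score condition
    return enough or important                                         # could also be combined with "and"

# Toy example (all names and numbers are illustrative only).
new_scene_features = {"amount", "device_id", "ip_region", "hour", "account_age"}
feature_weights = {"amount": 0.3, "device_id": 0.3, "ip_region": 0.2, "hour": 0.1, "account_age": 0.1}
old_scene_models = {
    "card_theft_model": ["amount", "device_id", "ip_region", "merchant_type"],
    "legacy_model": ["merchant_type", "card_bin"],
}
models_to_migrate = [name for name, feats in old_scene_models.items()
                     if is_migratable(feats, new_scene_features, feature_weights)]
print(models_to_migrate)  # -> ['card_theft_model']
```

Whether the two conditions are joined with "or" or "and" is exactly the kind of design choice the paragraph above leaves open.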
In addition, the preset migration rule may include other specific rules. The type of prediction model to be trained in the new scene may be determined and specified in advance by developers based on experience or an algorithm. Then, to further measure, beyond the feature-vector comparison, whether each model in the old scene can be migrated to the new scene, at least one type specified in advance for the new-scene prediction model may also be obtained when obtaining the set of models to be migrated, and the preset migration rule may further include: the pre-specified at least one type includes the type of the model.
Determining whether a model in the old scene can be migrated to the new scene from the two dimensions of feature vectors and model type makes it more likely that a migrated model, after further training, will perform well in the new scene. Of course, the preset migration rule may also include rules of other dimensions, which the embodiments of this specification do not limit.
Of course, the models to be migrated from the old scene to the new scene may also be specified by developers, who can likewise assess, based on experience or an algorithm, whether each model can be migrated to the new scene and how it would perform after migration, using dimensions such as feature vectors and model type.
S102, selecting at least one model from the set of models to be migrated for prediction labeling of unlabeled samples in a new scene;
s103, obtaining an initial training sample set in a new scene, wherein the initial training sample set comprises unlabeled samples;
s104, adding a prediction label for the label-free sample in the initial training sample set by using the selected model;
for convenience of description, S102 to S104 are explained in combination.
When training is based on supervised learning, the training samples must be labeled samples. Training samples can be labeled in a variety of ways. For example, manual labeling is generally accurate, but the amount of training sample data for model training is usually large and manual labeling is inefficient. For another example, in some scenarios labels can be generated from actual outcomes: in a credit card scenario, when a bank verifies that a credit card has been stolen, the card and the corresponding transactions can be marked as black samples; in such scenarios, however, the black-sample labels may not be available within a short period of time.
In the embodiments of this specification, at least one model is selected from the set of models to be migrated and used to predictively label the unlabeled samples in the new scene, which improves labeling efficiency and shortens the labeling cycle.
Each model to be migrated can, in principle, be migrated to the new scene; however, because the input feature vectors and model types differ among them, some of the models can be applied to the new scene directly with good results, while others perform well only after being updated. Therefore, the models that perform better in the new scene can be selected from the set of models to be migrated for predictive labeling.
The selection of at least one model from the set of models to be migrated may specifically be achieved in various ways.
In an embodiment of this specification, a third feature set may first be obtained, which includes: a plurality of pre-specified feature vectors for predicting sample labels in the new scene. Then, the feature set corresponding to each model to be migrated is obtained, where any such feature set includes: a plurality of feature vectors input by the corresponding model. Finally, at least one model is selected from the set of models to be migrated according to a preset selection rule.
Similarly to the determination of the models to be migrated in S101, when selecting models for predictive labeling, whether to select a given model can be measured along dimensions such as the number of feature vectors in the intersection, the number of important feature vectors, and whether the model types match, which is not repeated here.
In addition, if models are judged only by whether a count or weighted score exceeds a preset threshold and whether the model types match, the set of models to be migrated may contain no model that conforms to the preset selection rule. Therefore, various priority-ordering conditions can be preset, and one or more models can be selected for predictive labeling according to the ordering result.
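As a non-authoritative sketch of the priority-ordering idea above, the candidates could be ranked by how well their input feature vectors cover the third feature set and the top-k taken; the coverage metric, the candidate names, and the value of k are assumptions for illustration only.

```python
# Hypothetical sketch: rank models to be migrated by coverage of the pre-specified
# label-prediction features (the third feature set) and select the top-k for labeling.

def select_labeling_models(candidate_models, label_prediction_features, k=1):
    """candidate_models: {model_name: feature vectors input by that model}."""
    def coverage(features):
        shared = set(features) & set(label_prediction_features)
        return len(shared) / max(len(label_prediction_features), 1)
    ranked = sorted(candidate_models,
                    key=lambda name: coverage(candidate_models[name]), reverse=True)
    return ranked[:k]

# Toy example (names are illustrative only).
candidates = {
    "hidden_case_model": ["device_id", "ip_region", "blacklist_hit", "amount"],
    "card_theft_model": ["amount", "hour"],
}
print(select_labeling_models(candidates, ["device_id", "blacklist_hit", "amount"], k=1))
# -> ['hidden_case_model']
```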
As described for S101, the models to be migrated may be specified by developers; likewise, in this step, the model used to predictively label unlabeled samples in the new scene may be selected from the set of models to be migrated by developers based on experience or an algorithm, which is not repeated here.
The initial training sample set in the new scene may include unlabeled samples to which prediction labels are to be added, and may also include labeled samples (white samples and/or black samples) carrying actual labels; the selected models are used to predictively label the unlabeled samples.
The prediction labels can be added in a number of specific ways.
In the embodiments of this specification, correspondences between different values and different prediction labels may be preset; for example, a value greater than (or less than) a preset value corresponds to a black-sample label, and otherwise to a white-sample label. For any selected model: the unlabeled samples in the initial training sample set are input into the model to obtain output predicted values. For any input unlabeled sample: the weight of the predicted value output by each model is determined; the weighted sum of the predicted values is calculated and the prediction label corresponding to the weighted sum is determined; and that prediction label is added to the unlabeled sample.
For example, if only one model is selected from the set of models to be migrated for predictive labeling, the corresponding prediction label can be obtained directly from the predicted value output by that model (which is then equal to the weighted sum).
For another example, if multiple models are selected from the set of models to be migrated for predictive labeling, the weights corresponding to the models' output values can be preset, for example giving higher weights to better-performing models; alternatively, all models can be given the same weight, which is equivalent to not setting weights at all.
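A minimal sketch of this labeling step, assuming two selected models, preset weights, and a single threshold that maps the weighted sum to a black or white label (all names and values below are illustrative, not prescribed by the method):

```python
# Hypothetical sketch: each selected model outputs a predicted value for an
# unlabeled sample, the values are combined by a weighted sum, and the sum is
# mapped to a black/white prediction label by a preset threshold.

def predict_label(sample, selected_models, model_weights, threshold=0.5):
    """selected_models: {name: callable returning a risk score for a sample}."""
    weighted_sum = sum(model_weights[name] * model(sample)
                       for name, model in selected_models.items())
    return "black" if weighted_sum >= threshold else "white"

# Toy example with two selected models (equal weights would also be valid).
selected_models = {
    "hidden_case_model": lambda s: 0.9 if s["blacklist_hit"] else 0.1,
    "card_theft_model": lambda s: min(s["amount"] / 10000.0, 1.0),
}
model_weights = {"hidden_case_model": 0.7, "card_theft_model": 0.3}

unlabeled = [{"blacklist_hit": True, "amount": 200}, {"blacklist_hit": False, "amount": 300}]
labeled = [dict(s, label=predict_label(s, selected_models, model_weights)) for s in unlabeled]
print(labeled)  # first sample -> "black", second -> "white"
```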
In addition, the prediction labels added by the selected models can be manually inspected and corrected to improve their accuracy.
The above options can be set flexibly according to the actual situation, and this specification does not limit them.
And S105, updating the model to be migrated by utilizing the initial training sample set added with the prediction label based on a supervised learning algorithm to obtain the model applicable to the new scene.
When updating the models to be migrated, only the initial training sample set may be used, that is, the training samples to which prediction labels have been added are input into the models to be migrated.
If few training samples have been accumulated in the new scene, a training sample set from the old scene can also be obtained, which includes labeled samples carrying actual labels; the initial sample set in the new scene is merged with the training sample set from the old scene, and the models to be migrated are updated with the merged training sample set based on a supervised learning algorithm.
A large number of training samples have already been accumulated in the old scene, and they are labeled samples carrying actual labels, so they can assist the updating of the models to be migrated when the number of training samples accumulated in the new scene is small.
Of course, the training samples in the old scene are not necessarily all suitable for updating models for the new scene: some may be highly similar to the training samples in the new scene while others are less similar. Therefore, after the initial sample set in the new scene is merged with the training sample set from the old scene, different weights can be preset for different training samples in the merged set.
For example, the training samples in the initial sample set receive the highest weight; among the old-scene samples, those more similar to the new-scene samples receive the next-highest weight, and those with lower similarity receive the lowest weight.
In addition, as time goes on, labeled samples carrying actual labels accumulate in the new scene and form an optimized training sample set. This optimized training sample set in the new scene can be obtained, which includes the labeled samples carrying actual labels; the prediction-labeled initial training sample set is merged with the optimized training sample set carrying actual labels, and the models to be migrated are updated with the merged training sample set based on a supervised learning algorithm.
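As an illustrative sketch only, the merge-and-reweight update described above could be expressed with a generic supervised learner; scikit-learn, the weight values, and the toy data are assumptions, and the learner merely stands in for whichever model is being migrated.

```python
# Hypothetical sketch of the update step: prediction-labeled new-scene samples are
# merged with actually labeled samples (from the old scene and/or accumulated later
# in the new scene), per-sample weights are attached, and the model is refit.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: new-scene samples carry prediction labels, old-scene samples carry actual labels.
X_new, y_new = np.random.rand(50, 4), np.random.randint(0, 2, 50)    # prediction labels
X_old, y_old = np.random.rand(200, 4), np.random.randint(0, 2, 200)  # actual labels

X = np.vstack([X_new, X_old])
y = np.concatenate([y_new, y_old])

# New-scene samples get the highest weight; old-scene samples a lower weight.
# (A finer split by similarity to the new scene could assign several weight tiers.)
sample_weight = np.concatenate([np.full(len(y_new), 1.0), np.full(len(y_old), 0.5)])

model = LogisticRegression(max_iter=1000)   # stands in for the model to be migrated
model.fit(X, y, sample_weight=sample_weight)
```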
It can be understood that, depending on the new scene's demand for prediction models, each model to be migrated may be applied directly in the new scene and updated according to this scheme while in use, so as to obtain a model better suited to the new scene; alternatively, it may be applied in the new scene only after being updated for a period of time, and may continue to be updated after being applied. This specification does not limit this.
The method for training the prediction model for the new scene provided in the present specification is described below with reference to a more specific example.
In the field of financial risk control, a large amount of accumulated transaction data can be used as sample data to train a wind control model through machine learning, so that risk decisions on new transactions can be made promptly and accurately based on the trained model.
However, when building a risk control model for a new scene, a long time is often required to accumulate the large amount of sample data needed to train the model. For example, the sample data volume is generally related to the transaction volume and accumulation time of the new scene, and the training sample set needs to contain a certain amount of black-sample data.
To address this problem, existing wind control models from old scenes can be migrated to the new scene.
The new scene and the old scenes may be trading markets of different countries and regions, and the wind control models deployed and used in the old scenes may include: a card-stealing wind control model, an account-stealing wind control model, a hidden-case identification model, and so on; these wind control models can be trained on transaction data from multiple countries and regions.
As shown in fig. 2, multiple models that can be deployed for use in new and old scenes can be trained in the cloud in advance based on data gathered from the old scenes.
The card-stealing and account-stealing wind control models perform risk control for stolen credit cards and stolen payment accounts respectively, and can be trained by supervised learning.
The hidden-case identification model takes more targeted feature vectors as input and identifies transactions that have not been confirmed by a bank as cases (i.e., non-obvious cases) but exhibit case characteristics.
For example, if multiple credit cards or payment accounts are used simultaneously on the same device (such as one mobile phone) or in the same network environment, the risk of batch card and account theft on that device or in that environment is high. For another example, devices, accounts, credit cards, and network environments associated with blacklisted devices, accounts, credit cards, or network environments carry a high risk of card and account theft. For another example, devices, accounts, credit cards, and network environments that have performed abnormal transactions (abnormal in transaction amount, time, location, and so on) carry a high risk of card and account theft. The hidden-case identification model can recognize the corresponding transactions as black samples based on such features.
Moreover, the hidden-case identification model can be trained through unsupervised learning, so it can be applied in scenes without actual cases (labels).
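For intuition only, the following rule-style sketch mimics the kind of signals listed above that the hidden-case identification model relies on; a real model could be an unsupervised learner rather than hand-written rules, and the field names and thresholds here are hypothetical.

```python
# Simplified illustration (not the patented model) of the signals used to flag
# transactions as black-sample candidates: many cards/accounts on one device,
# blacklist associations, and abnormal transaction behaviour.

def looks_like_hidden_case(txn, cards_per_device, blacklist, amount_limit=5000):
    many_cards = cards_per_device.get(txn["device_id"], 0) >= 5          # batch-theft signal
    blacklisted = txn["device_id"] in blacklist or txn["account"] in blacklist
    abnormal = txn["amount"] > amount_limit or txn["hour"] in (2, 3, 4)   # odd amount or time
    return many_cards or blacklisted or abnormal

txn = {"device_id": "dev-42", "account": "acc-7", "amount": 9000, "hour": 3}
print(looks_like_hidden_case(txn, cards_per_device={"dev-42": 6}, blacklist={"acc-99"}))
# -> True (flagged as a black-sample candidate)
```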
When the card-stealing wind control model, the account-stealing wind control model, and the hidden-case identification model need to be deployed in a new scene, they can be issued to the new scene locally in the form of model files. The deployed models can then be used directly on site to score transaction events, make risk decisions, and so on.
The models issued and deployed from the cloud are trained on training samples from multiple countries and regions, so their training samples are comprehensive and the models are highly general.
After each model is deployed and used locally in the new scene, the process can be divided into several stages from the perspective of training sample accumulation.
In the first stage, for example within one week after deployment, the accumulation time in the new scene is short, few training samples have been accumulated, and none of them carry labels, so the models cannot be updated. In this stage, transactions in the new scene are therefore risk-controlled using the cloud-trained, not yet updated models.
In the second stage, for example between one week and one month after deployment, a certain number of training samples have been accumulated in the new scene to form the initial training sample set, and if combined with the large amount of old-scene training data issued by the cloud, each model can be updated. However, because financial institutions take a long time to process card and account theft, no labeled samples with actual labels have accumulated yet; therefore, prediction labels can be added to the initial training sample set through the hidden-case identification model.
In addition, different weights can be set for the training samples from the new and old scenes. For example, if the new scene is the Malaysian market and the old scenes include the Thai, US, and Japanese markets, Thailand is closer to Malaysia in consumption level and habits, so its transaction data are more similar, while US and Japanese transaction data are less similar to Malaysia's. Accordingly, the training samples accumulated locally in Malaysia can be given the highest weight, training samples from Thailand a higher weight, and training samples from the US and Japan a lower weight. Through such dynamic weighting, each updated model can be made suitable for the new scene even when data in the new scene are scarce.
The models updated in the second stage can continue to be used to make transaction decisions in the new scene.
In the third stage, for example after one month of deployment, a sufficient number of training samples, including labeled samples with actual labels, have accumulated in the new scene, and the models can be further updated. The training samples used for updating may include only the actually labeled samples from the new scene, may also include the new-scene samples to which the hidden-case identification model added prediction labels, and may further include a large number of old-scene training samples, and so on.
Beyond pre-training models in the cloud from accumulated data and then deploying and updating the wind control models in the new scene, the data accumulated in the new scene can also be uploaded to the cloud, where they can be used to update existing models, train other new models, deploy to other new scenes, and so on.
Therefore, by applying this scheme, models deployed and used in an old scene can be migrated to a new scene. Because the sample accumulation time in the new scene is short, label prediction is performed with the models to be migrated when samples have no, or only a few, actual labels, so that the models to be migrated can be further optimized and made better suited to the new scene, providing a more efficient and accurate prediction model training scheme for the new scene.
Corresponding to the above method embodiment, an embodiment of the present specification further provides a prediction model training apparatus for a new scenario, and referring to fig. 3, the apparatus may include:
a to-be-migrated model obtaining module 110, configured to obtain a set of models to be migrated, where a model to be migrated is: a model that is deployed and used in an old scene and can be migrated to the new scene;
an annotation model selection module 120, configured to select at least one model from the set of models to be migrated, so as to perform predictive annotation on an unlabeled sample in a new scene;
a sample set obtaining module 130, configured to obtain an initial training sample set in a new scene, where the initial training sample set includes unlabeled samples;
a sample labeling module 140, configured to add a prediction label to an unlabeled sample in the initial training sample set by using the selected model;
and the model updating module 150 is configured to update the model to be migrated based on a supervised learning algorithm by using the initial training sample set to which the prediction tag is added, so as to obtain a model applicable to a new scene.
In a specific embodiment provided in this specification, the to-be-migrated model obtaining module 110 may include:
a to-be-migrated feature obtaining unit 111, configured to obtain a first feature set, where the set includes: a plurality of feature vectors which can be extracted by a predetermined new scene training sample; for any model deployed in the old scenario: obtaining a second feature set, the set comprising: a plurality of feature vectors input by the model;
a model-to-be-migrated selecting unit 112, configured to determine the model as a model to be migrated when the model conforms to a preset migration rule, where the preset migration rule includes: the feature vectors in the intersection of the first feature set and the second feature set satisfy a preset migration condition.
In a specific embodiment provided in this specification, the preset migration condition may include:
the number of the feature vectors in the intersection is not less than a preset threshold; and/or the weighted score calculated according to the preset weight of each feature vector included in the intersection is not less than a preset threshold.
In a specific implementation manner provided in this specification, the module to be migrated obtaining module 110 may further include: a to-be-migrated type obtaining unit 113 configured to obtain at least one type specified in advance for the new scene prediction model;
the preset migration rule may further include: the at least one type that is pre-specified includes a type of the model.
In a specific embodiment provided in this specification, the annotation model selecting module 120 may include:
an annotated feature obtaining unit 121, configured to obtain a third feature set, where the set includes: a plurality of pre-specified feature vectors for predicting sample labels in a new scene; obtaining each feature set corresponding to each model to be migrated, wherein any feature set comprises: a plurality of feature vectors input by the corresponding model;
and the annotation model selecting unit 122 is configured to select at least one model from the set of models to be migrated according to a preset selection rule.
In one embodiment provided in the present specification, the sample labeling module 140 may include:
a predicted value determination unit 141 for, for any one of the selected models: inputting unlabeled samples in the initial training sample set into the model to obtain an output predicted value;
a predicted label determination unit 142, configured to, for any unlabeled exemplar input: determining the weight of the predicted value output by each model; calculating the weighted sum of the predicted values, and determining a predicted label corresponding to the weighted sum; the prediction tag is added to the unlabeled exemplar.
In an embodiment provided in this specification, the sample set obtaining module 130 may be further configured to: obtaining an optimized training sample set in a new scene, wherein the optimized training sample set comprises labeled samples added with actual labels;
the model updating module 150 may be specifically configured to: and merging the initial training sample set added with the prediction label and the optimized training sample set added with the actual label, and updating the model to be migrated by utilizing the merged training sample set based on a supervised learning algorithm.
In an embodiment provided in this specification, the sample set obtaining module 130 may be further configured to: obtaining a training sample set in an old scene, wherein the training sample set comprises labeled samples added with actual labels;
the model updating module 150 may be specifically configured to: and merging the initial sample set in the new scene with the training sample set in the old scene, and updating the model to be migrated by using the merged training sample set based on a supervised learning algorithm.
The implementation process of the functions and actions of each module in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
Embodiments of this specification also provide a computer device, which includes at least a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the aforementioned prediction model training method for a new scene. The method at least includes:
obtaining a set of models to be migrated, wherein a model to be migrated is: a model that is deployed and used in an old scene and can be migrated to the new scene;
selecting at least one model from the set of models to be migrated for predictive labeling of unlabeled samples in the new scene;
obtaining an initial training sample set in the new scene, wherein the initial training sample set comprises unlabeled samples;
adding prediction labels to the unlabeled samples in the initial training sample set by using the selected model;
and updating the models to be migrated with the prediction-labeled initial training sample set based on a supervised learning algorithm, to obtain models applicable to the new scene.
Fig. 4 is a schematic diagram illustrating a more specific hardware structure of a computing device according to an embodiment of the present disclosure, where the computing device may include: a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040, and a bus 1050. Wherein the processor 1010, memory 1020, input/output interface 1030, and communication interface 1040 are communicatively coupled to each other within the device via bus 1050.
The processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits, and is configured to execute related programs to implement the technical solutions provided in the embodiments of the present disclosure.
The Memory 1020 may be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1020 may store an operating system and other application programs, and when the technical solution provided by the embodiments of the present specification is implemented by software or firmware, the relevant program codes are stored in the memory 1020 and called to be executed by the processor 1010.
The input/output interface 1030 is used for connecting an input/output module to input and output information. The i/o module may be configured as a component in a device (not shown) or may be external to the device to provide a corresponding function. The input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and the output devices may include a display, a speaker, a vibrator, an indicator light, etc.
The communication interface 1040 is used for connecting a communication module (not shown in the drawings) to implement communication interaction between the present apparatus and other apparatuses. The communication module can realize communication in a wired mode (such as USB, network cable and the like) and also can realize communication in a wireless mode (such as mobile network, WIFI, Bluetooth and the like).
Bus 1050 includes a path that transfers information between various components of the device, such as processor 1010, memory 1020, input/output interface 1030, and communication interface 1040.
It should be noted that although the above-mentioned device only shows the processor 1010, the memory 1020, the input/output interface 1030, the communication interface 1040 and the bus 1050, in a specific implementation, the device may also include other components necessary for normal operation. In addition, those skilled in the art will appreciate that the above-described apparatus may also include only those components necessary to implement the embodiments of the present description, and not necessarily all of the components shown in the figures.
Embodiments of the present specification also provide a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the aforementioned prediction model training method for a new scene. The method at least comprises the following steps:
obtaining a set of models to be migrated, wherein a model to be migrated is: a model that is deployed and used in an old scene and can be migrated to the new scene;
selecting at least one model from the set of models to be migrated for predictive labeling of unlabeled samples in the new scene;
obtaining an initial training sample set in the new scene, wherein the initial training sample set comprises unlabeled samples;
adding prediction labels to the unlabeled samples in the initial training sample set by using the selected model;
and updating the models to be migrated with the prediction-labeled initial training sample set based on a supervised learning algorithm, to obtain models applicable to the new scene.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
From the above description of the embodiments, it is clear to those skilled in the art that the embodiments of the present disclosure can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the embodiments of the present specification may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments of the present specification.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus embodiment, since it is substantially similar to the method embodiment, it is relatively simple to describe, and reference may be made to some descriptions of the method embodiment for relevant points. The above-described apparatus embodiments are merely illustrative, and the modules described as separate components may or may not be physically separate, and the functions of the modules may be implemented in one or more software and/or hardware when implementing the embodiments of the present disclosure. And part or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The foregoing are only specific embodiments of this specification. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principles of the embodiments of this specification, and such improvements and modifications shall also fall within the scope of protection of the embodiments of this specification.

Claims (17)

1. A method of predictive model training for a new scene, the method comprising:
obtaining a set of models to be migrated from a cloud, wherein a model to be migrated is: a model that is deployed and used in an old scene and can be migrated to the new scene, and the intersection of the feature vectors input by the model to be migrated in the old scene and the feature vectors extractable from training samples in the new scene meets a preset migration condition;
selecting at least one model from the set of models to be migrated for predictive labeling of unlabeled samples in a new scene;
obtaining an initial training sample set in a new scene, wherein the initial training sample set comprises unlabeled samples;
adding prediction labels to the unlabeled samples in the initial training sample set by using the selected model;
updating the model to be migrated by using the initial training sample set added with the prediction label based on a supervised learning algorithm to obtain a model applicable to a new scene, and uploading data accumulated in the new scene to a cloud.
2. The method of claim 1, the obtaining a set of models to be migrated, comprising:
obtaining a first feature set, the set comprising: a plurality of feature vectors which can be extracted by a predetermined new scene training sample;
for any model deployed in the old scenario:
obtaining a second feature set, the set comprising: a plurality of feature vectors input by the model;
determining the model as a model to be migrated under the condition that the model conforms to a preset migration rule, wherein the preset migration rule comprises: the feature vectors included in the intersection of the first feature set and the second feature set satisfy a preset migration condition.
3. The method of claim 2, the preset migration condition comprising:
the number of the feature vectors in the intersection is not less than a preset threshold;
and/or
the weighted score calculated according to the preset weights of the feature vectors included in the intersection is not less than a preset threshold.
4. The method of claim 2, wherein:
the obtaining of the set of models to be migrated further includes: obtaining at least one type specified for a new scene prediction model in advance;
the preset migration rule further comprises: the at least one type that is pre-specified includes a type of the model.
5. The method of claim 1, the selecting at least one model from the set of models to be migrated, comprising:
obtaining a third feature set, the set comprising: a plurality of pre-specified feature vectors for predicting sample labels in a new scene;
obtaining each feature set corresponding to each model to be migrated, wherein any feature set comprises: a plurality of feature vectors input by the corresponding model;
and selecting at least one model from the set of models to be migrated according to a preset selection rule.
6. The method of claim 1, the using the selected model to add predictive labels to unlabeled samples in the initial set of training samples, comprising:
for any model selected: inputting unlabeled samples in the initial training sample set into the model to obtain an output predicted value;
for any unlabeled sample entered: determining the weight of the predicted value output by each model; calculating the weighted sum of the predicted values, and determining a predicted label corresponding to the weighted sum; the prediction tag is added to the unlabeled exemplar.
7. The method of claim 1, wherein updating the model to be migrated based on a supervised learning algorithm using the initial training sample set to which the predictive label has been added comprises:
obtaining an optimized training sample set in a new scene, wherein the optimized training sample set comprises labeled samples added with actual labels;
and merging the initial training sample set added with the prediction label and the optimized training sample set added with the actual label, and updating the model to be migrated by utilizing the merged training sample set based on a supervised learning algorithm.
8. The method of claim 1, wherein updating the model to be migrated based on a supervised learning algorithm using the initial training sample set to which the predictive label has been added comprises:
obtaining a training sample set in an old scene, wherein the training sample set comprises labeled samples added with actual labels;
and merging the initial sample set in the new scene with the training sample set in the old scene, and updating the model to be migrated by using the merged training sample set based on a supervised learning algorithm.
9. An apparatus for predictive model training for a new scene, the apparatus comprising:
a to-be-migrated model obtaining module, configured to obtain a set of models to be migrated from a cloud, wherein a model to be migrated is: a model that is deployed and used in an old scene and can be migrated to the new scene, and the intersection of the feature vectors input by the model to be migrated in the old scene and the feature vectors extractable from training samples in the new scene meets a preset migration condition;
the labeling model selecting module is used for selecting at least one model from the set of the models to be migrated so as to predict and label the unlabeled samples in the new scene;
the system comprises a sample set acquisition module, a sample set acquisition module and a data processing module, wherein the sample set acquisition module is used for acquiring an initial training sample set in a new scene, and the initial training sample set comprises unlabeled samples;
the sample labeling module is used for adding a prediction label to the unlabeled sample in the initial training sample set by using the selected model;
and the model updating module is used for updating the model to be migrated by utilizing the initial training sample set added with the prediction label based on a supervised learning algorithm to obtain a model applicable to a new scene, and uploading data accumulated in the new scene to the cloud.
10. The apparatus of claim 9, the to-be-migrated model obtaining module comprising:
a to-be-migrated feature obtaining unit, configured to obtain a first feature set, where the first feature set includes: a plurality of feature vectors which can be extracted by a predetermined new scene training sample; for any model deployed in the old scenario: obtaining a second feature set, the set comprising: a plurality of feature vectors input by the model;
the model to be migrated selecting unit is used for determining the model as the model to be migrated under the condition that the model accords with the preset migration rule; the preset migration rule comprises the following steps: and the feature vector included in the intersection of the first feature set and the second feature set meets a preset migration condition.
11. The apparatus of claim 10, the preset migration condition comprising:
the number of the feature vectors in the intersection is not less than a preset threshold;
and/or
the weighted score calculated according to the preset weights of the feature vectors included in the intersection is not less than a preset threshold.
12. The apparatus of claim 10, wherein:
the module for obtaining the model to be migrated further comprises: the device comprises a to-be-migrated type acquisition unit, a migration prediction unit and a migration prediction unit, wherein the to-be-migrated type acquisition unit is used for acquiring at least one type which is specified for a new scene prediction model in advance;
the preset migration rule further comprises: the at least one type that is pre-specified includes a type of the model.
13. The apparatus of claim 9, the annotation model selection module, comprising:
a labeled feature obtaining unit, configured to obtain a third feature set, where the third feature set includes: a plurality of pre-specified feature vectors for predicting sample labels in a new scene; obtaining each feature set corresponding to each model to be migrated, wherein any feature set comprises: a plurality of feature vectors input by the corresponding model;
and the marking model selecting unit is used for selecting at least one model from the set of the models to be migrated according to a preset selection rule.
14. The apparatus of claim 9, the sample labeling module, comprising:
a predicted value determination unit for, for any one of the selected models: inputting unlabeled samples in the initial training sample set into the model to obtain an output predicted value;
a predicted label determination unit for, for any unlabeled exemplar input: determining the weight of the predicted value output by each model; calculating the weighted sum of the predicted values, and determining a predicted label corresponding to the weighted sum; the prediction tag is added to the unlabeled exemplar.
15. The apparatus of claim 9, wherein:
the sample set acquisition module is further configured to: obtaining an optimized training sample set in a new scene, wherein the optimized training sample set comprises labeled samples added with actual labels;
the model update module is specifically configured to: and merging the initial training sample set added with the prediction label and the optimized training sample set added with the actual label, and updating the model to be migrated by utilizing the merged training sample set based on a supervised learning algorithm.
16. The apparatus of claim 9, wherein:
the sample set acquisition module is further configured to: obtaining a training sample set in an old scene, wherein the training sample set comprises labeled samples added with actual labels;
the model update module is specifically configured to: and merging the initial sample set in the new scene with the training sample set in the old scene, and updating the model to be migrated by using the merged training sample set based on a supervised learning algorithm.
17. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 8 when executing the program.
CN201810875574.1A 2018-08-03 2018-08-03 Prediction model training method and device for new scene Active CN109359793B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201810875574.1A CN109359793B (en) 2018-08-03 2018-08-03 Prediction model training method and device for new scene
TW108119499A TWI818999B (en) 2018-08-03 2019-06-05 Predictive model training method and device for new scenarios
PCT/CN2019/091658 WO2020024716A1 (en) 2018-08-03 2019-06-18 Method and device for training prediction model for new scenario

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810875574.1A CN109359793B (en) 2018-08-03 2018-08-03 Prediction model training method and device for new scene

Publications (2)

Publication Number Publication Date
CN109359793A CN109359793A (en) 2019-02-19
CN109359793B true CN109359793B (en) 2020-11-17

Family

ID=65349816

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810875574.1A Active CN109359793B (en) 2018-08-03 2018-08-03 Prediction model training method and device for new scene

Country Status (3)

Country Link
CN (1) CN109359793B (en)
TW (1) TWI818999B (en)
WO (1) WO2020024716A1 (en)

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109359793B (en) * 2018-08-03 2020-11-17 创新先进技术有限公司 Prediction model training method and device for new scene
CN110033276A (en) * 2019-03-08 2019-07-19 阿里巴巴集团控股有限公司 It is a kind of for security strategy generation method, device and the equipment transferred accounts
CN111797289A (en) * 2019-04-09 2020-10-20 Oppo广东移动通信有限公司 Model processing method and device, storage medium and electronic equipment
CN110083834B (en) * 2019-04-24 2023-05-09 北京百度网讯科技有限公司 Semantic matching model training method and device, electronic equipment and storage medium
CN110232403B (en) * 2019-05-15 2024-02-27 腾讯科技(深圳)有限公司 Label prediction method and device, electronic equipment and medium
CN110263824B (en) * 2019-05-29 2023-09-05 创新先进技术有限公司 Model training method, device, computing equipment and computer readable storage medium
CN110163296B (en) * 2019-05-29 2020-12-18 北京达佳互联信息技术有限公司 Image recognition method, device, equipment and storage medium
CN110390425A (en) * 2019-06-20 2019-10-29 阿里巴巴集团控股有限公司 Prediction technique and device
CN110458393B (en) * 2019-07-05 2023-07-18 创新先进技术有限公司 Method and device for determining risk identification scheme and electronic equipment
CN110689135B (en) * 2019-09-05 2022-10-11 第四范式(北京)技术有限公司 Anti-money laundering model training method and device and electronic equipment
CN110838020B (en) * 2019-09-16 2023-06-23 平安科技(深圳)有限公司 Recommendation method and device based on vector migration, computer equipment and storage medium
CN110765876A (en) * 2019-09-20 2020-02-07 深圳码隆科技有限公司 Training method and device for commodity recognition model
CN112581250B (en) * 2019-09-30 2023-12-29 深圳无域科技技术有限公司 Model generation method, device, computer equipment and storage medium
CN110705717B (en) * 2019-09-30 2022-05-17 支付宝(杭州)信息技术有限公司 Training method, device and equipment of machine learning model executed by computer
CN110928889A (en) * 2019-10-23 2020-03-27 深圳市华讯方舟太赫兹科技有限公司 Training model updating method, device and computer storage medium
CN110910864B (en) * 2019-10-24 2023-02-03 深圳追一科技有限公司 Training sample selection method and device, computer equipment and storage medium
CN111062563A (en) * 2019-11-08 2020-04-24 支付宝(杭州)信息技术有限公司 Risk prediction model training method, risk prediction method and related device
CN112861892B (en) * 2019-11-27 2023-09-01 杭州海康威视数字技术股份有限公司 Method and device for determining attribute of object in picture
CN111309715B (en) * 2020-01-15 2023-04-18 腾讯科技(深圳)有限公司 Call scene identification method and device
CN111352965B (en) * 2020-02-18 2023-09-08 腾讯科技(深圳)有限公司 Training method of sequence mining model, and processing method and equipment of sequence data
CN111428783B (en) * 2020-03-23 2022-06-21 支付宝(杭州)信息技术有限公司 Method and device for performing sample domain conversion on training samples of recommendation model
CN111488972B (en) * 2020-04-09 2023-08-08 北京百度网讯科技有限公司 Data migration method, device, electronic equipment and storage medium
CN111598338B (en) * 2020-05-18 2021-08-31 贝壳找房(北京)科技有限公司 Method, apparatus, medium, and electronic device for updating prediction model
CN113780314A (en) * 2020-05-20 2021-12-10 阿里巴巴集团控股有限公司 Classification model training method, device and system
CN111859872A (en) * 2020-07-07 2020-10-30 中国建设银行股份有限公司 Text labeling method and device
CN112036509A (en) * 2020-09-30 2020-12-04 北京百度网讯科技有限公司 Method and apparatus for training image recognition models
CN112270545A (en) * 2020-10-27 2021-01-26 上海淇馥信息技术有限公司 Financial risk prediction method and device based on migration sample screening and electronic equipment
CN112258068B (en) * 2020-10-30 2022-03-29 武汉理工光科股份有限公司 Security activity scheme generation and dynamic update method and device
CN112417485B (en) * 2020-11-30 2024-02-02 支付宝(杭州)信息技术有限公司 Model training method, system and device based on trusted execution environment
CN112417767B (en) * 2020-12-09 2024-02-27 东软睿驰汽车技术(沈阳)有限公司 Attenuation trend determination model construction method and attenuation trend determination method
CN112711938B (en) * 2021-03-26 2021-07-06 北京沃丰时代数据科技有限公司 Reading understanding model construction method and device, electronic equipment and storage medium
CN115396831A (en) * 2021-05-08 2022-11-25 中国移动通信集团浙江有限公司 Interaction model generation method, device, equipment and storage medium
CN113325855B (en) * 2021-08-02 2021-11-30 北京三快在线科技有限公司 Model training method for predicting obstacle trajectory based on migration scene
CN113780578B (en) * 2021-09-08 2023-12-12 北京百度网讯科技有限公司 Model training method, device, electronic equipment and readable storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI553573B (en) * 2014-05-15 2016-10-11 財團法人工業技術研究院 Aspect-sentiment analysis and viewing system, device therewith and method therefor
US20160253597A1 (en) * 2015-02-27 2016-09-01 Xerox Corporation Content-aware domain adaptation for cross-domain classification
US10504029B2 (en) * 2015-06-30 2019-12-10 Microsoft Technology Licensing, Llc Personalized predictive models
CN106599922B (en) * 2016-12-16 2021-08-24 中国科学院计算技术研究所 Transfer learning method and system for large-scale data calibration
US10776693B2 (en) * 2017-01-31 2020-09-15 Xerox Corporation Method and system for learning transferable feature representations from a source domain for a target domain
CN107679859B (en) * 2017-07-18 2020-08-25 中国银联股份有限公司 Risk identification method and system based on migration deep learning
CN107895177B (en) * 2017-11-17 2021-08-03 南京邮电大学 Transfer classification learning method for keeping image classification sparse structure
CN108021931A (en) * 2017-11-20 2018-05-11 阿里巴巴集团控股有限公司 A kind of data sample label processing method and device
CN107944874B (en) * 2017-12-13 2021-07-20 创新先进技术有限公司 Wind control method, device and system based on transfer learning
CN109359793B (en) * 2018-08-03 2020-11-17 创新先进技术有限公司 Prediction model training method and device for new scene

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799889A (en) * 2011-05-23 2012-11-28 索尼公司 Learning device, learning method, and program
CN106991439A (en) * 2017-03-28 2017-07-28 南京天数信息科技有限公司 Image-recognizing method based on deep learning and transfer learning
CN108304936A (en) * 2017-07-12 2018-07-20 腾讯科技(深圳)有限公司 Machine learning model training method and device, facial expression image sorting technique and device

Also Published As

Publication number Publication date
CN109359793A (en) 2019-02-19
TW202008237A (en) 2020-02-16
WO2020024716A1 (en) 2020-02-06
TWI818999B (en) 2023-10-21

Similar Documents

Publication Publication Date Title
CN109359793B (en) Prediction model training method and device for new scene
CN110278175B (en) Graph structure model training and garbage account identification method, device and equipment
CN108171260B (en) Picture identification method and system
CN109214421B (en) Model training method and device and computer equipment
CN111553488B (en) Risk recognition model training method and system for user behaviors
CN108596410B (en) Automatic wind control event processing method and device
CN107590690B (en) Data processing method and device and server
CN113361593B (en) Method for generating image classification model, road side equipment and cloud control platform
CN111144648A (en) People flow prediction equipment and method
CN114580263A (en) Knowledge graph-based information system fault prediction method and related equipment
CN112288572B (en) Service data processing method and computer equipment
CN109102324B (en) Model training method, and red packet material laying prediction method and device based on model
CN111369258A (en) Entity object type prediction method, device and equipment
CN111079944A (en) Method and device for realizing interpretation of transfer learning model, electronic equipment and storage medium
CN112966113A (en) Data risk prevention and control method, device and equipment
CN111383030A (en) Transaction risk detection method, device and equipment
CN110490058B (en) Training method, device and system of pedestrian detection model and computer readable medium
CN112328869A (en) User loan willingness prediction method and device and computer system
CN114187009A (en) Feature interpretation method, device, equipment and medium of transaction risk prediction model
CN108446738A (en) A kind of clustering method, device and electronic equipment
CN116308738B (en) Model training method, business wind control method and device
CN111798263A (en) Transaction trend prediction method and device
CN111523995B (en) Method, device and equipment for determining characteristic value of model migration
CN111898626B (en) Model determination method and device and electronic equipment
CN112561569B (en) Dual-model-based store arrival prediction method, system, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200923

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Innovative advanced technology Co.,Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant before: Advanced innovation technology Co.,Ltd.

Effective date of registration: 20200923

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Advanced innovation technology Co.,Ltd.

Address before: P.O. Box 847, 4th Floor, Grand Cayman Capital Building, British Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

TA01 Transfer of patent application right
GR01 Patent grant