CN112015562A - Resource allocation method and device based on transfer learning and electronic equipment - Google Patents

Resource allocation method and device based on transfer learning and electronic equipment Download PDF

Info

Publication number
CN112015562A
CN112015562A (application number CN202011159526.6A)
Authority
CN
China
Prior art keywords
user, basic data, model, generate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011159526.6A
Other languages
Chinese (zh)
Inventor
张国光
宋孟楠
苏绥绥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qilu Information Technology Co Ltd
Original Assignee
Beijing Qilu Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qilu Information Technology Co Ltd filed Critical Beijing Qilu Information Technology Co Ltd
Priority to CN202011159526.6A
Publication of CN112015562A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Abstract

The disclosure relates to a resource allocation method and apparatus based on transfer learning, an electronic device, and a computer-readable medium. The method comprises: acquiring basic data of a user in a first preset scene; inputting the basic data into a user analysis model to generate a user score, wherein the user analysis model is generated from basic data of other users in a second preset scene by a transfer learning method; and allocating resources to the user based on the user score. With the resource allocation method and apparatus, electronic device, and computer-readable medium based on transfer learning of the disclosure, related models from different application scenes can be reused directly, so that resources can be allocated to the user quickly and accurately, time is saved, and a resource quota can be allocated to the user automatically even when user samples are lacking, which makes resource allocation in the application scene more efficient and reduces errors and labor cost.

Description

Resource allocation method and device based on transfer learning and electronic equipment
Technical Field
The present disclosure relates to the field of computer information processing, and in particular, to a resource allocation method and apparatus based on transfer learning, an electronic device, and a computer-readable medium.
Background
In the training of a machine learning model, the features of the samples play a decisive role. Consider an internet financial service enterprise with three business departments A, B and C, where the users of each department need to be modeled separately. In such an enterprise, the number of users in department A is large and a large number of labeled samples have been accumulated, so a model trained on department A's user samples performs well; department B has few users and a small sample size, so a well-performing machine learning model cannot be trained on department B's user samples alone. Although the specific businesses of the departments differ, they are usually different branches of the same business background and share many common features. If the model for business A could be reused for business B, time would be saved and the poor training results obtained from business B's users alone could be avoided.
In the classical machine learning setting, the training set and the test set are assumed to be identically distributed: the model is trained on the training set and evaluated on the test set. In practice, however, the test scenario is often uncontrollable and the distributions of the test set and the training set can differ greatly, in which case a so-called over-fitting problem occurs: the model performs poorly on the test set. Taking face recognition as an example, a model trained on faces of eastern people and used to recognize western people shows a clear drop in recognition performance. When the training set and the test set are not identically distributed, a model trained by minimizing the empirical error on the training data performs poorly at test time.
The above information disclosed in this background section is only for enhancement of understanding of the background of the disclosure, and therefore it may contain information that does not constitute prior art already known to a person of ordinary skill in the art.
Disclosure of Invention
In view of the above, the present disclosure provides a resource allocation method and apparatus based on transfer learning, an electronic device, and a computer-readable medium. The disclosure aims to solve the following problems: in some application scenarios the number of user samples is too small for the prior art to complete the training of a machine learning model, in which case user strategies and resources cannot be allocated to the current user automatically and intelligently, and extensive manual review must be used instead, which wastes a large amount of time and resources. The disclosure further aims to allocate resource quotas to users automatically even when user samples are lacking, so that resource allocation in the application scenario becomes more efficient and errors and labor cost are reduced.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to an aspect of the present disclosure, a resource allocation method based on transfer learning is provided, where the method includes: acquiring basic data of a user in a first preset scene; inputting the basic data into a user analysis model to generate a user score, wherein the user analysis model is generated through basic data of other users in a second preset scene and a transfer learning method; and allocating resources for the user based on the user score.
Optionally, the method further comprises: generating an initial model based on the basic data of other users in a second preset scene; performing transfer learning on the initial model based on basic data of a plurality of historical users in a first preset scene to generate the user analysis model.
Optionally, generating an initial model based on the basic data of the other users in the second preset scene includes: acquiring basic data of other users in a second preset scene; generating a plurality of user feature combinations based on the base data of the other users; training a logistic regression model based on the plurality of user feature combinations to generate the initial model.
Optionally, generating a user feature combination based on the basic data of the other users includes: determining a label of the basic data of the other users; inputting the base data of the other users with labels into an extreme gradient boosting decision tree model to generate a plurality of user feature combinations.
Optionally, inputting the base data of the other users with labels into an extreme gradient boosting decision tree model to generate a plurality of user feature combinations includes: inputting the basic data of the other users with the labels into the extreme gradient boosting decision tree model to generate a calculation result; sequentially numbering the leaf nodes of the tree structures in the calculation result; and generating the plurality of user feature combinations based on the calculation result and the numbering.
Optionally, training a logistic regression model based on the plurality of user feature combinations to generate the initial model includes: performing one-hot encoding on the plurality of user feature combinations; inputting the processed plurality of user feature combinations into a logistic regression model for training; and generating the initial model after training.
Optionally, performing transfer learning on the initial model based on basic data of a plurality of historical users in a first preset scenario to generate the user analysis model includes: acquiring basic data of a plurality of historical users in the first preset scene; training the initial model through the basic data of the plurality of historical users; fine-tuning the initial model based on a training result; and generating the user analysis model when the convergence function meets a preset condition.
Optionally, training the initial model through the basic data of the plurality of historical users includes: and sequentially inputting the basic data of each historical user in the basic data of the plurality of historical users into the initial model for training.
Optionally, allocating resources for the user based on the user score includes: comparing the user score to a plurality of threshold intervals to determine a user category for the user; and allocating resources for the users based on the user categories.
Optionally, the method further comprises: and allocating special shared resources for the user based on the user category.
According to an aspect of the present disclosure, a resource allocation apparatus based on transfer learning is provided, the apparatus including: the data module is used for acquiring basic data of a user in a first preset scene; the analysis module is used for inputting the basic data into a user analysis model to generate a user score, wherein the user analysis model is generated through basic data of other users in a second preset scene and a transfer learning method; and the allocation module is used for allocating resources for the user based on the user score.
Optionally, the method further comprises: the initial model module is used for generating an initial model based on the basic data of other users in a second preset scene; and the transfer learning module is used for carrying out transfer learning on the initial model based on the basic data of a plurality of historical users in a first preset scene so as to generate the user analysis model.
Optionally, the initial model module includes: the data unit is used for acquiring basic data of other users in a second preset scene; a combination unit, configured to generate a plurality of user feature combinations based on the basic data of the other users; an initial unit configured to train a logistic regression model based on the plurality of user feature combinations to generate the initial model.
Optionally, the combining unit is further configured to determine a tag of the basic data of the other user; inputting the base data of the other users with labels into an extreme gradient boosting decision tree model to generate a plurality of user feature combinations.
Optionally, the combining unit is further configured to input the basic data of the other users with the labels into an extreme gradient boosting decision tree model to generate a calculation result; sequentially number the leaf nodes of the tree structures in the calculation result; and generate the plurality of user feature combinations based on the calculation result and the numbering.
Optionally, the initial unit is further configured to perform one-hot encoding on the plurality of user feature combinations; input the processed plurality of user feature combinations into a logistic regression model for training; and generate the initial model after training.
Optionally, the transfer learning module includes: the history unit is used for acquiring basic data of a plurality of historical users in a first preset scene; the training unit is used for training the initial model through the basic data of the plurality of historical users; the fine-tuning unit is used for fine-tuning the initial model based on a training result; and the generating unit is used for generating the user analysis model when the convergence function meets a preset condition.
Optionally, the training unit is further configured to sequentially input the basic data of each of the plurality of historical users into the initial model for training.
Optionally, the allocation module includes: a comparing unit, configured to compare the user score with a plurality of threshold intervals to determine a user category of the user; and the allocation unit is used for allocating resources for the users based on the user categories.
Optionally, the method further comprises: and the special shared resource unit is used for allocating special shared resources for the user based on the user category.
According to an aspect of the present disclosure, an electronic device is provided, the electronic device including: one or more processors; and storage means for storing one or more programs; wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as described above.
According to an aspect of the disclosure, a computer-readable medium is proposed, on which a computer program is stored, which program, when being executed by a processor, carries out the method as above.
According to the resource allocation method, apparatus, electronic device, and computer-readable medium based on transfer learning of the disclosure, basic data of a user in a first preset scene is acquired; the basic data is input into a user analysis model to generate a user score, where the user analysis model is generated from basic data of other users in a second preset scene by a transfer learning method; and resources are allocated to the user based on the user score. In this way, related models from different application scenes can be reused directly, so that resources can be allocated to the user quickly and accurately, time is saved, and the poor training results obtained when a model is trained on the user data of a single scene alone are avoided.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings. The drawings described below are merely some embodiments of the present disclosure, and other drawings may be derived from those drawings by those of ordinary skill in the art without inventive effort.
Fig. 1 is a system block diagram illustrating a resource allocation method and apparatus based on transfer learning according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating a resource allocation method based on transfer learning according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating a resource allocation method based on transfer learning according to another exemplary embodiment.
Fig. 4 is a flowchart illustrating a resource allocation method based on transfer learning according to another exemplary embodiment.
Fig. 5 is a block diagram illustrating a resource allocation apparatus based on transfer learning according to an example embodiment.
Fig. 6 is a block diagram illustrating a resource allocation apparatus based on transfer learning according to another exemplary embodiment.
FIG. 7 is a block diagram illustrating an electronic device in accordance with an example embodiment.
FIG. 8 is a block diagram illustrating a computer-readable medium in accordance with an example embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals denote the same or similar parts in the drawings, and thus, a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various components, these components should not be limited by these terms. These terms are used to distinguish one element from another. Thus, a first component discussed below may be termed a second component without departing from the teachings of the disclosed concept. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It is to be understood by those skilled in the art that the drawings are merely schematic representations of exemplary embodiments, and that the blocks or processes shown in the drawings are not necessarily required to practice the present disclosure and are, therefore, not intended to limit the scope of the present disclosure.
In the present disclosure, resources refer to any usable materials, information, or time, where information resources include computing resources and various types of data resources, and the data resources include all kinds of private data in various domains. The innovation of the disclosure lies in using information interaction between server and client to make the resource allocation process more automatic and efficient and to reduce labor cost. Thus, the disclosure can in principle be applied to the allocation of various resources, including physical goods, water, electricity, and meaningful data. For convenience, however, resource allocation is described below taking financial data resources as an example; those skilled in the art will understand that the disclosure can also be applied to the allocation of other resources.
Fig. 1 is a system block diagram illustrating a resource allocation method and apparatus based on transfer learning according to an exemplary embodiment.
As shown in fig. 1, the system architecture 10 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as a financial services application, a shopping application, a web browser application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server that provides various services, such as a background management server that supports financial services websites browsed by the user using the terminal apparatuses 101, 102, and 103. The backend management server may analyze and/or otherwise process the received user data and feed back the processing results (e.g., resource quotas) to the administrator of the financial services website and/or the terminal devices 101, 102, 103.
The server 105 may, for example, obtain basic data of the user in a first preset scenario; the server 105 may, for example, input the basic data into a user analysis model to generate a user score, where the user analysis model is generated from basic data of other users in a second preset scenario by a transfer learning method; and the server 105 may allocate resources to the user, for example, based on the user score.
The server 105 may also, for example, generate an initial model based on the basic data of other users in the second preset scenario, and may perform transfer learning on the initial model based on basic data of a plurality of historical users in the first preset scenario to generate the user analysis model.
The server 105 may be a single physical server or may be composed of a plurality of servers. It should be noted that the resource allocation method based on transfer learning provided by the embodiments of the present disclosure may be executed by the server 105, and accordingly, a resource allocation apparatus based on transfer learning may be disposed in the server 105, while the web page through which the user browses the financial service platform is generally located in the terminal devices 101, 102, 103.
Fig. 2 is a flowchart illustrating a resource allocation method based on transfer learning according to an exemplary embodiment. The resource allocation method 20 based on the transfer learning includes at least steps S202 to S206.
As shown in fig. 2, in S202, basic data of a user in a first preset scene is acquired. The first preset scene can be a resource borrowing type A scene in the financial service platform, and the basic data of the user can comprise the gender, age, occupation, income, third-party platform borrowing condition, credit granting condition, past resource borrowing condition and the like of the user. More specifically, in the first preset scenario, the sample size of the user traffic is too small to support training of the machine learning model.
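For illustration only, a minimal sketch of what one user's basic data record might look like is given below; the field names and values are assumptions for this example and are not a schema fixed by the disclosure.

```python
# Hypothetical basic data for one user in the first preset scene (scenario A).
# All field names and values are illustrative assumptions.
user_basic_data = {
    "gender": "F",
    "age": 32,
    "occupation": "engineer",
    "monthly_income": 18000,        # income in local currency units
    "third_party_loans": 2,         # number of loans on other platforms
    "credit_granted": 50000,        # existing credit line
    "past_overdue_count": 0,        # historical resource-return behaviour
}
```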
In S204, the basic data is input into a user analysis model to generate a user score, where the user analysis model is generated from basic data of other users in a second preset scene by a transfer learning method. The second preset scene can be a resource borrowing type B scene in the financial service platform. The second preset scene contains a large number of users, so a model with higher accuracy can be trained on the users in the second scene and then migrated to the first preset scene for use.
Domain adaptation is a representative method in transfer learning and refers to using information-rich source-domain samples to improve the performance of a target-domain model. Two concepts are crucial in the domain adaptation problem: the source domain, which differs from the domain of the test samples but has rich supervisory information, and the target domain, which is the domain of the test samples and has no labels or only a few. The source domain and the target domain usually belong to the same class of task but have different distributions.
In the transfer learning method of the present disclosure, transfer between the models of the first preset scene and the second preset scene is mainly performed through feature-based transfer learning (Feature based TL). The basic idea of feature-based transfer is to learn a common feature representation such that, in the common feature space, the distributions of the source domain and the target domain are as similar as possible.
In S206, resource allocation is performed for the user based on the user score. The method specifically comprises the following steps: comparing the user score to a plurality of threshold intervals to determine a user category for the user; and allocating resources for the users based on the user categories.
A plurality of risk threshold intervals can be set: when the user falls in a low-risk interval, the user's resource quota can be increased, and when the user falls in a high-risk interval, the user's resource quota can be reduced.
In one embodiment, the method further includes: generating a user supervision policy when the user risk value is higher than a threshold value, so that a user whose risk value exceeds the preset threshold can be set as a key monitoring user, supervised in real time, and prevented from creating resource security risks; and generating a user quota policy when the user risk value is lower than the threshold value.
In one embodiment, the method further includes allocating special shared resources to the user based on the user category, where the special resources may include coupons, deferred return measures, and the like.
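As an illustration of S206, the following sketch maps a user score to a category via threshold intervals and adjusts the quota accordingly; the interval boundaries, adjustment factors, and extra benefits are assumptions, not values fixed by the disclosure.

```python
# Hypothetical score-to-category mapping; thresholds and factors are assumed.
def allocate_resources(user_score: float, current_quota: float) -> dict:
    """Compare the user score with threshold intervals, determine the user
    category, and allocate resources (quota, special shared resources)."""
    if user_score < 0.2:                     # low-risk interval
        return {"category": "low_risk",
                "quota": current_quota * 1.2,
                "special_resources": ["coupon", "deferred_return"]}
    if user_score < 0.6:                     # medium-risk interval
        return {"category": "medium_risk", "quota": current_quota}
    # high-risk interval: reduce the quota and flag for key monitoring
    return {"category": "high_risk",
            "quota": current_quota * 0.5,
            "key_monitoring": True}
```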
According to the resource allocation method based on transfer learning, basic data of a user in a first preset scene is acquired; the basic data is input into a user analysis model to generate a user score, where the user analysis model is generated from basic data of other users in a second preset scene by a transfer learning method; and resources are allocated to the user based on the user score. In this way, related models from different application scenes can be reused directly, so that resources can be allocated to the user quickly and accurately, time is saved, and the poor training results obtained when a model is trained on the user data of a single scene alone are avoided.
It is clearly understood that this disclosure describes how to make and use specific examples, but the principles of this disclosure are not limited to any details of these examples. Rather, these principles can be applied to many other embodiments based on the teachings of the present disclosure.
The inventors of the present disclosure find that, in the field of internet finance, the features involved in many business scenarios are typically high-dimensional and sparse and the sample size is huge, so the model usually adopted is a fast logistic regression (LR) model. However, the learning capability of the LR algorithm is limited, so obtaining good predictions requires a large amount of feature engineering up front: engineers typically spend great effort screening and processing features, and even then the final improvement to the machine learning model may be very limited.
A tree model algorithm naturally performs feature screening: at each split it selects the optimal splitting node by entropy, information gain, the Gini index, or similar criteria. Therefore, after the tree model is trained, the path from the root node to each leaf node consists of locally optimal selected features. Based on this idea, the inventors of the present disclosure improve the fitting capability of the LR algorithm by using the feature screening ability of the tree model to obtain locally optimal feature combinations and then feeding the combined features into the LR algorithm. A detailed description of the corresponding embodiment follows with reference to fig. 3.
Fig. 3 is a flowchart illustrating a resource allocation method based on transfer learning according to another exemplary embodiment. The flow 30 shown in fig. 3 is a detailed description of "generating an initial model based on the basic data of other users in the second preset scenario".
As shown in fig. 3, in S302, basic data of other users in a second preset scene is acquired.
In S304, a plurality of user feature combinations are generated based on the base data of the other users. The method comprises the following steps: determining a label of the basic data of the other users; inputting the base data of the other users with labels into an extreme gradient boosting decision tree model to generate a plurality of user feature combinations.
Key features can be extracted from the basic data to determine its label according to the characteristics of the initial model to be generated. More specifically, when the initial model reflects the user's default probability, whether the user has defaulted can be determined from the resource return time in the user's basic data, and the label is determined accordingly. When the initial model reflects a user feature class, the label may be determined from the feature-class basic data of the user, which is not limited in this disclosure.
In one embodiment, inputting the base data of the other users with labels into an extreme gradient boosting decision tree model to generate a plurality of user feature combinations, comprises: inputting the basic data of the other users with the labels into an extreme gradient lifting decision tree model to generate a calculation result; sequentially numbering leaf nodes in a tree structure in the calculation result; generating the plurality of user feature combinations based on the number of calculations and the number.
In S306, a logistic regression model is trained based on the plurality of user feature combinations to generate the initial model. The method comprises the following steps: performing a unique transformation process on the plurality of user feature combinations; inputting the processed plurality of user feature combinations into a logistic regression model for training; and generating the initial model after training.
More specifically, let x be an input feature vector that traverses two trees down to a leaf node. Suppose the left tree has three leaf nodes and the right tree has two, and x falls on the first leaf of the left tree and the second leaf of the right tree. The one-hot code for the left tree is [1,0,0], the one-hot code for the right tree is [0,1], and the final feature is their concatenation [1,0,0,0,1]. The transformed features are then input into a linear classifier, and a linear model based on the combined features is obtained by training.
During feature transformation, the number of trees in the extreme gradient boosting decision tree model equals the number of resulting combined features; the vector length of each combined feature differs and depends on the number of leaf nodes of the corresponding tree. For example, if 100 trees are obtained by training, 100 combined features can be obtained.
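A minimal sketch of steps S304 and S306 follows, assuming standard xgboost and scikit-learn interfaces: the boosted trees are trained on labelled scenario-B data, the leaf index reached in every tree is read off and one-hot encoded as the combined features, and a logistic regression is fitted on them. Hyper-parameters and function names are illustrative, not mandated by the disclosure.

```python
import numpy as np
import xgboost as xgb
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder


def build_initial_model(X_b: np.ndarray, y_b: np.ndarray):
    # Step 1 (S304): train the boosted trees on labelled scenario-B basic data.
    gbdt = xgb.XGBClassifier(n_estimators=100, max_depth=3, learning_rate=0.1)
    gbdt.fit(X_b, y_b)

    # Step 2: for every sample, the index of the leaf reached in each tree,
    # shape (n_samples, n_trees) -- one combined feature per tree, whose
    # one-hot length equals that tree's number of leaves.
    leaf_idx = gbdt.apply(X_b)

    # Step 3: one-hot encode the leaf indices to obtain the combined features.
    encoder = OneHotEncoder(handle_unknown="ignore")
    combined = encoder.fit_transform(leaf_idx)

    # Step 4 (S306): train the logistic regression -- the "initial model".
    lr = LogisticRegression(max_iter=1000)
    lr.fit(combined, y_b)
    return gbdt, encoder, lr
```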
Fig. 4 is a flowchart illustrating a resource allocation method based on transfer learning according to another exemplary embodiment. The process 40 shown in fig. 4 is a detailed description of "performing transfer learning on the initial model based on the basic data of a plurality of historical users in a first preset scenario to generate the user analysis model".
As shown in fig. 4, in S402, basic data of a plurality of historical users in a first preset scene is acquired.
In S404, the initial model is trained through the basic data of the plurality of historical users. The method comprises the following steps: and sequentially inputting the basic data of each historical user in the basic data of the plurality of historical users into the initial model for training.
In S406, the initial model is fine-tuned based on the training results.
In S408, when the convergence function satisfies a preset condition, the user analysis model is generated. More specifically, the convergence function can be implemented as a reward function: if the learning agent obtains a better result after a training step, it receives a positive reward, and if it obtains a worse result, the reward is negative. Each training step is evaluated by the reward function, and the path with the maximum total reward (the sum of the rewards of all steps) is finally selected; the parameters along that path are taken as the optimal model parameters.
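The following sketch, under stated assumptions, illustrates S402 to S408: scenario-A historical users are mapped through the same tree-plus-encoder feature transformation learned on scenario B, and the initial logistic regression is fine-tuned on them with warm starting until the loss improvement falls below a tolerance. Full-batch rounds stand in for the strictly per-user sequential updates, and the log-loss tolerance stands in for the disclosure's convergence/reward function; both simplifications, like the parameter values, are assumptions.

```python
# Fine-tuning sketch: reuse the scenario-B feature space and warm-start the
# initial LR on scenario-A historical users. The stopping rule is an assumption.
import numpy as np
from sklearn.metrics import log_loss


def transfer_learn(gbdt, encoder, lr_init, X_a_hist, y_a_hist,
                   tol=1e-4, max_rounds=20):
    # Map scenario-A users into the common combined-feature space (S402).
    combined_a = encoder.transform(gbdt.apply(X_a_hist))

    # warm_start=True makes each fit() continue from the current coefficients,
    # so repeated short rounds fine-tune the initial model (S404/S406).
    lr_init.set_params(warm_start=True, max_iter=20)

    prev_loss = np.inf
    for _ in range(max_rounds):
        lr_init.fit(combined_a, y_a_hist)
        loss = log_loss(y_a_hist, lr_init.predict_proba(combined_a))
        if prev_loss - loss < tol:       # preset convergence condition (S408)
            break
        prev_loss = loss
    return lr_init                        # the user analysis model
```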
Those skilled in the art will appreciate that all or part of the steps implementing the above embodiments may be implemented as a computer program executed by a CPU. The computer program, when executed by the CPU, performs the functions defined by the above-described methods provided by the present disclosure. The program may be stored in a computer readable storage medium, which may be a read-only memory, a magnetic disk, an optical disk, or the like.
Furthermore, it should be noted that the above-mentioned figures are only schematic illustrations of the processes involved in the methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Fig. 5 is a block diagram illustrating a resource allocation apparatus based on transfer learning according to an example embodiment. As shown in fig. 5, the resource allocation apparatus 50 based on the transfer learning includes: a data module 502, an analysis module 504, and an assignment module 506.
The data module 502 is configured to obtain basic data of a user in a first preset scene;
the analysis module 504 is configured to input the basic data into a user analysis model, and generate a user score, where the user analysis model is generated by using basic data and a transfer learning method of other users in a second preset scene;
the allocation module 506 is configured to allocate resources for the user based on the user score. The assignment module 506 includes: a comparing unit, configured to compare the user score with a plurality of threshold intervals to determine a user category of the user; and the allocation unit is used for allocating resources for the users based on the user categories. And the special shared resource unit is used for allocating special shared resources for the user based on the user category.
Fig. 6 is a block diagram illustrating a resource allocation apparatus based on transfer learning according to another exemplary embodiment. As shown in fig. 6, the resource allocation apparatus 60 based on the transfer learning includes: an initial model module 602 and a migration learning module 604.
The initial model module 602 is configured to generate an initial model based on the basic data of the other users in the second preset scene. The initial model module 602 includes: a data unit, configured to acquire basic data of other users in a second preset scene; a combination unit, configured to generate a plurality of user feature combinations based on the basic data of the other users, where the combination unit is further configured to determine the labels of the basic data of the other users, input the labeled basic data of the other users into an extreme gradient boosting decision tree model to generate a calculation result, sequentially number the leaf nodes of the tree structures in the calculation result, and generate the plurality of user feature combinations based on the calculation result and the numbering; and an initial unit, configured to train a logistic regression model based on the plurality of user feature combinations to generate the initial model, where the initial unit is further configured to perform one-hot encoding on the plurality of user feature combinations, input the processed plurality of user feature combinations into a logistic regression model for training, and generate the initial model after training.
The transfer learning module 604 is configured to perform transfer learning on the initial model based on basic data of a plurality of historical users in a first preset scenario to generate the user analysis model. The transfer learning module 604 includes: a history unit, configured to acquire basic data of a plurality of historical users in a first preset scene; a training unit, configured to train the initial model through the basic data of the plurality of historical users, and further configured to input the basic data of each of the plurality of historical users into the initial model in sequence for training; a fine-tuning unit, configured to fine-tune the initial model based on a training result; and a generating unit, configured to generate the user analysis model when the convergence function meets a preset condition.
According to the resource allocation apparatus based on transfer learning, basic data of a user in a first preset scene is acquired; the basic data is input into a user analysis model to generate a user score, where the user analysis model is generated from basic data of other users in a second preset scene by a transfer learning method; and resources are allocated to the user based on the user score. In this way, related models from different application scenes can be reused directly, so that resources can be allocated to the user quickly and accurately, time is saved, and the poor training results obtained when a model is trained on the user data of a single scene alone are avoided.
FIG. 7 is a block diagram illustrating an electronic device in accordance with an example embodiment.
An electronic device 700 according to this embodiment of the disclosure is described below with reference to fig. 7. The electronic device 700 shown in fig. 7 is only an example and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, electronic device 700 is embodied in the form of a general purpose computing device. The components of the electronic device 700 may include, but are not limited to: at least one processing unit 710, at least one memory unit 720, a bus 730 that connects the various system components (including the memory unit 720 and the processing unit 710), a display unit 740, and the like.
The storage unit stores program code executable by the processing unit 710 to cause the processing unit 710 to perform the steps according to various exemplary embodiments of the present disclosure described in the method sections above in this specification. For example, the processing unit 710 may perform the steps shown in fig. 2, 3 and 4.
The memory unit 720 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 7201 and/or a cache memory unit 7202, and may further include a read only memory unit (ROM) 7203.
The memory unit 720 may also include a program/utility 7204 having a set (at least one) of program modules 7205, such program modules 7205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 730 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 700 may also communicate with one or more external devices 700' (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 700, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 700 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 750. Also, the electronic device 700 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the internet) via the network adapter 760. The network adapter 760 may communicate with other modules of the electronic device 700 via the bus 730. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 700, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, as shown in fig. 8, the technical solution according to the embodiment of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, or a network device, etc.) to execute the above method according to the embodiment of the present disclosure.
The software product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, C++, or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
The computer readable medium carries one or more programs which, when executed by a device, cause the computer readable medium to perform the functions of: acquiring basic data of a user in a first preset scene; inputting the basic data into a user analysis model to generate a user score, wherein the user analysis model is generated through basic data of other users in a second preset scene and a transfer learning method; and allocating resources for the user based on the user score.
Those skilled in the art will appreciate that the modules described above may be distributed in the apparatus according to the description of the embodiments, or may be modified accordingly in one or more apparatuses unique from the embodiments. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Exemplary embodiments of the present disclosure are specifically illustrated and described above. It is to be understood that the present disclosure is not limited to the precise arrangements, instrumentalities, or instrumentalities described herein; on the contrary, the disclosure is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (10)

1. A resource allocation method based on transfer learning is characterized by comprising the following steps:
acquiring basic data of a user in a first preset scene;
inputting the basic data into a user analysis model to generate a user score, wherein the user analysis model is generated through basic data of other users in a second preset scene and a transfer learning method;
performing resource allocation for the user based on the user score;
the generating of the user analysis model through the basic data and the transfer learning method of other users in the second preset scene comprises the following steps:
inputting basic data of other users in a second preset scene into an extreme gradient boosting decision tree model for training to generate a plurality of user feature combinations;
training a logistic regression model through the plurality of user feature combinations to generate an initial model;
performing transfer learning on the initial model based on basic data of a plurality of historical users in a first preset scene to generate the user analysis model.
2. The method of claim 1, further comprising:
generating an initial model based on the basic data of other users in a second preset scene;
performing transfer learning on the initial model based on basic data of a plurality of historical users in a first preset scene to generate the user analysis model.
3. The method of claim 2, wherein generating the initial model based on the base data of the other users in the second predetermined scenario comprises:
acquiring basic data of other users in a second preset scene;
generating a plurality of user feature combinations based on the base data of the other users;
training a logistic regression model based on the plurality of user feature combinations to generate the initial model.
4. The method of claim 3, wherein generating a user feature combination based on the base data of the other users comprises:
determining a label of the basic data of the other users;
inputting the base data of the other users with labels into an extreme gradient boosting decision tree model to generate a plurality of user feature combinations.
5. The method of claim 4, wherein inputting the base data of the other users with labels into an extreme gradient boosting decision tree model to generate a plurality of user feature combinations comprises:
inputting the basic data of the other users with the labels into the extreme gradient boosting decision tree model to generate a calculation result;
sequentially numbering the leaf nodes of the tree structures in the calculation result;
generating the plurality of user feature combinations based on the calculation result and the numbering.
6. The method of claim 3, wherein training a logistic regression model based on the plurality of user feature combinations to generate the initial model comprises:
performing one-hot encoding on the plurality of user feature combinations;
inputting the processed plurality of user feature combinations into a logistic regression model for training;
and generating the initial model after training.
7. The method of claim 2, wherein performing transfer learning on the initial model based on base data of a plurality of historical users in a first preset scenario to generate the user analysis model comprises:
acquiring basic data of a plurality of historical users in a first preset scene;
training the initial model through basic data of the plurality of historical users;
fine-tuning the initial model based on a training result;
and when the convergence function meets a preset condition, generating the user analysis model.
8. A resource allocation apparatus based on transfer learning, comprising:
the data module is used for acquiring basic data of a user in a first preset scene;
the analysis module is used for inputting the basic data into a user analysis model to generate a user score, wherein the user analysis model is generated through basic data of other users in a second preset scene and a transfer learning method;
and the allocation module is used for allocating resources for the user based on the user score.
9. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN202011159526.6A 2020-10-27 2020-10-27 Resource allocation method and device based on transfer learning and electronic equipment Pending CN112015562A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011159526.6A CN112015562A (en) 2020-10-27 2020-10-27 Resource allocation method and device based on transfer learning and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011159526.6A CN112015562A (en) 2020-10-27 2020-10-27 Resource allocation method and device based on transfer learning and electronic equipment

Publications (1)

Publication Number Publication Date
CN112015562A true CN112015562A (en) 2020-12-01

Family

ID=73527959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011159526.6A Pending CN112015562A (en) 2020-10-27 2020-10-27 Resource allocation method and device based on transfer learning and electronic equipment

Country Status (1)

Country Link
CN (1) CN112015562A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112600906A (en) * 2020-12-09 2021-04-02 中国科学院深圳先进技术研究院 Resource allocation method and device for online scene and electronic equipment
CN112541640A (en) * 2020-12-22 2021-03-23 平安银行股份有限公司 Resource authority management method and device, electronic equipment and computer storage medium
CN112650583A (en) * 2020-12-23 2021-04-13 新智数字科技有限公司 Resource allocation method, device, readable medium and electronic equipment
CN112949752A (en) * 2021-03-25 2021-06-11 支付宝(杭州)信息技术有限公司 Training method and device of business prediction system
CN112949752B (en) * 2021-03-25 2022-09-06 支付宝(杭州)信息技术有限公司 Training method and device of business prediction system
CN117035004A (en) * 2023-07-24 2023-11-10 北京泰策科技有限公司 Text, picture and video generation method and system based on multi-modal learning technology

Similar Documents

Publication Publication Date Title
CN112015562A (en) Resource allocation method and device based on transfer learning and electronic equipment
CN112529702B (en) User credit granting strategy allocation method and device and electronic equipment
CN112508694B (en) Method and device for processing resource limit application by server and electronic equipment
CN112016796B (en) Comprehensive risk score request processing method and device and electronic equipment
CN112348659B (en) User identification policy distribution method and device and electronic equipment
CN113298354B (en) Automatic generation method and device of service derivative index and electronic equipment
CN111145009A (en) Method and device for evaluating risk after user loan and electronic equipment
CN111967543A (en) User resource quota determining method and device and electronic equipment
CN111598494A (en) Resource limit adjusting method and device and electronic equipment
CN111582314A (en) Target user determination method and device and electronic equipment
CN113297287B (en) Automatic user policy deployment method and device and electronic equipment
CN112017062A (en) Resource limit distribution method and device based on guest group subdivision and electronic equipment
CN112016792A (en) User resource quota determining method and device and electronic equipment
CN114742645B (en) User security level identification method and device based on multi-stage time sequence multitask
CN113298555B (en) Promotion strategy generation method and device and electronic equipment
CN114091815A (en) Resource request processing method, device and system and electronic equipment
CN113902545A (en) Resource limit distribution method and device and electronic equipment
CN112348658A (en) Resource allocation method and device and electronic equipment
CN113568739A (en) User resource limit distribution method and device and electronic equipment
CN113902543A (en) Resource quota adjusting method and device and electronic equipment
CN111626438B (en) Model migration-based user policy allocation method and device and electronic equipment
CN113590310A (en) Resource allocation method and device based on rule touch rate scoring and electronic equipment
CN112527852A (en) User dynamic support strategy allocation method and device and electronic equipment
CN112950003A (en) User resource quota adjusting method and device and electronic equipment
CN112508631A (en) User policy distribution method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination