CN113393299A - Recommendation model training method and device, electronic equipment and storage medium

Info

Publication number
CN113393299A
Authority
CN
China
Prior art keywords
sequence
training data
user
training
user behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110674994.5A
Other languages
Chinese (zh)
Inventor
鲁转丽
周洪菊
李倩
郭志军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202110674994.5A priority Critical patent/CN113393299A/en
Publication of CN113393299A publication Critical patent/CN113393299A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0631Item recommendations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393Score-carding, benchmarking or key performance indicator [KPI] analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/08Auctions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/04Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Theoretical Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • Economics (AREA)
  • General Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Educational Administration (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Technology Law (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Game Theory and Decision Science (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure provides a recommendation model training method which can be used in the fields of artificial intelligence and finance. The method comprises the following steps: acquiring training data, wherein the training data comprises user attributes, a user behavior sequence in a preset time range and transaction attributes; determining a label sequence of the training data based on the user behavior sequence and the transaction attribute; and training the initial model using the training data and the sequence of labels of the training data to obtain a target recommendation model. In addition, the disclosure also provides a recommendation model training device, an electronic device, a readable storage medium and a computer program product.

Description

Recommendation model training method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence and the field of finance, and more particularly, to a recommended model training method, a recommended model training apparatus, an electronic device, a readable storage medium, and a computer program product.
Background
With the rapid development of the Internet, the online business of related organizations has grown steadily, and online marketing campaigns have become one of the important means of attracting users.
In implementing the disclosed concept, the inventors found that presenting content in display positions is one of the important ways of carrying out content recommendation.
Disclosure of Invention
In view of the above, the present disclosure provides a recommended model training method, a recommended model training apparatus, an electronic device, a readable storage medium, and a computer program product.
One aspect of the present disclosure provides a recommendation model training method, including: acquiring training data, wherein the training data comprises user attributes, a user behavior sequence in a preset time range and transaction attributes; determining a label sequence of the training data based on the user behavior sequence and the transaction attribute; and training an initial model by using the training data and the label sequence of the training data to obtain a target recommendation model.
According to the embodiment of the disclosure, the number of bits of the user behavior sequence is equal to the number of content nodes, and the user behavior sequence corresponds to the content nodes one to one; the method further comprises the following steps: determining the user behavior sequence according to the access condition of the user to each content node within the preset time range; wherein, for each of the content nodes, when the user accesses the content node, it is determined that a value corresponding to the content node in the user behavior list is 1; and determining that the value corresponding to the content node in the user behavior list is 0 when the user does not access the content node.
According to an embodiment of the present disclosure, the determining the label sequence of the training data based on the user behavior sequence and the transaction attribute includes: constructing a first score sequence of the training data based on the user behavior sequence; constructing a second scoring sequence of the training data based on the transaction attributes; carrying out weighted summation on the user behavior sequence, the first score sequence and the second score sequence to obtain a third score sequence of the training data; and processing the third scoring sequence according to a preset rule to obtain a label sequence of the training data.
According to an embodiment of the present disclosure, the content nodes include N preset content nodes, where N is a positive integer; wherein the constructing a first score sequence of the training data based on the user behavior sequence includes: determining the number M of preset content nodes in the content nodes accessed by the user based on the user behavior sequence, wherein the M is a positive integer less than or equal to the N; generating an initial sequence with the length equal to that of the user behavior sequence; and replacing the numerical value in the initial sequence with the ratio of the M to the N to obtain the first scoring sequence.
According to an embodiment of the present disclosure, the transaction attribute includes a transaction value of each content node within the preset time range; wherein the constructing of the second score sequence of the training data based on the transaction attributes includes: determining an intermediate profit for each of the content nodes based on the transaction value for each of the content nodes and the profitability for each of the content nodes; generating an initial sequence with the length equal to that of the user behavior sequence; and replacing the numerical value in the initial sequence by using the ratio of the intermediate profit to the preset profit to obtain the second scoring sequence.
According to an embodiment of the present disclosure, the processing the third score sequence according to a preset rule to obtain a tag sequence of the training data includes: setting the sequence value to 1 for each sequence value of the third scoring sequence when the sequence value is greater than a preset threshold value; setting the sequence value to 0 when the sequence value is less than or equal to the preset threshold value; and traversing all sequence values in the third scoring sequence to obtain the tag sequence.
According to an embodiment of the present disclosure, the training an initial model using the training data and the label sequence of the training data includes: classifying the training data into a positive sample set and a negative sample set according to the tag sequence, wherein the negative sample set comprises the training data of which all numerical values in the tag sequence are 0; adjusting the number of the samples in the positive sample set or the negative sample set so that the number of the samples in the positive sample set and the number of the samples in the negative sample set after adjustment reach a preset ratio; and alternately inputting the samples in the adjusted positive sample set and the samples in the adjusted negative sample set into the initial model to train the initial model.
Another aspect of the disclosure provides a recommendation model training apparatus, which includes an obtaining module, a labeling module, and a training module. The obtaining module is used for acquiring training data, wherein the training data comprises user attributes, a user behavior sequence in a preset time range and transaction attributes; the labeling module is configured to determine a label sequence of the training data based on the user behavior sequence and the transaction attribute; and the training module is used for training an initial model by using the training data and the label sequence of the training data to obtain a target recommendation model.
Another aspect of the present disclosure provides an electronic device including: one or more processors; memory to store one or more instructions, wherein the one or more instructions, when executed by the one or more processors, cause the one or more processors to implement a method as described above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as described above when executed.
Another aspect of the disclosure provides a computer program product comprising computer executable instructions for implementing the method as described above when executed.
According to the embodiment of the disclosure, the acquired user historical behavior data and transaction data are used as training data, the training data are labeled according to the user historical behavior data and the transaction data, and the labeled training data are then used to train the recommendation model. By labeling the training data with data of different dimensions, the technical problem that the traffic value of a user cannot be optimized in an indirect bidding scenario is at least partially solved, the user experience is effectively improved, and the intermediate profit of the platform provider is guaranteed.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an exemplary system architecture 100 to which a recommendation model training method may be applied, according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a recommendation model training method according to an embodiment of the present disclosure;
FIG. 3A schematically illustrates a structural diagram of training data according to an embodiment of the present disclosure;
FIG. 3B schematically shows a structural diagram of a tag sequence of training data according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a schematic diagram of an initial model training procedure according to an embodiment of the present disclosure;
FIG. 5 schematically shows a block diagram of a recommendation model training apparatus according to an embodiment of the present disclosure;
FIG. 6 schematically illustrates a block diagram of an electronic device suitable for implementing a recommendation model training method in accordance with an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
With the development of the Internet, users have an increasing number of services to choose from when transacting business online. Different service providers therefore need to promote their service content online to increase business volume, and the same display position may be contended for by multiple demanders at the same time.
In the related art, a real-time bidding (RTB) strategy is often used: after the user's exposure behavior is evaluated, candidate service providers are selected from a plurality of service providers, and the service content of the provider with the highest bid among the candidates is placed in the display position for promotion. This strategy reduces the operating costs of service providers, increases the revenue of the relevant organization, and preserves the user experience.
However, when the service providers are internal departments of the relevant organization, their bids for the display position are generally equal to 0, and the real-time bidding strategy cannot be used to select an optimal service provider.
In view of this, embodiments of the present disclosure provide a training method for a recommendation model used to recommend service content. User behavior is predicted with the trained recommendation model, so that suitable service content can be displayed to the user, improving the user experience while safeguarding the benefit of the relevant organization.
Specifically, embodiments of the present disclosure provide a recommendation model training method, a recommendation model training apparatus, an electronic device, a readable storage medium, and a computer program product. The method comprises the following steps: acquiring training data, wherein the training data comprises user attributes, a user behavior sequence in a preset time range and transaction attributes; determining a label sequence of the training data based on the user behavior sequence and the transaction attribute; and training the initial model using the training data and the sequence of labels of the training data to obtain a target recommendation model.
In the technical scheme of the disclosure, the acquisition, storage and application of the personal information of the users involved all comply with the provisions of relevant laws and regulations, necessary security measures are taken, and public order and good customs are not violated.
The recommendation model training method and device provided by the embodiment of the disclosure can be used in the field of artificial intelligence or the field of finance, for example, in online banking business, the recommendation model training method and device provided by the embodiment of the disclosure can be used for selecting appropriate contents to be placed in each exhibition position in a program interface so as to maximize the intermediate income of a bank. In addition, the recommendation model training method and apparatus provided by the embodiment of the disclosure can also be used in any other fields except the artificial intelligence field and the financial field, and the application field of the embodiment of the disclosure is not limited.
FIG. 1 schematically illustrates an exemplary system architecture 100 to which a recommendation model training method may be applied, according to an embodiment of the disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104 and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired and/or wireless communication links, and so forth.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as a shopping application, a web browser application, a search application, an instant messaging tool, a mailbox client, and/or social platform software.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server that provides various services, such as a background management server that provides support for data processing requests sent by users using the terminal devices 101, 102, 103. The background management server can extract data to be processed from the corresponding database based on the received data processing request, process the data and feed back the processing result to the terminal equipment.
It should be noted that the recommendation model training method provided by the embodiment of the present disclosure may be generally executed by the server 105. Accordingly, the recommendation model training apparatus provided by the embodiments of the present disclosure may be generally disposed in the server 105. The recommendation model training method provided by the embodiment of the present disclosure may also be executed by a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the recommendation model training apparatus provided in the embodiments of the present disclosure may also be disposed in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Alternatively, the recommendation model training method provided by the embodiment of the present disclosure may also be executed by the terminal device 101, 102, or 103, or may also be executed by another terminal device different from the terminal device 101, 102, or 103. Accordingly, the recommendation model training apparatus provided in the embodiment of the present disclosure may also be disposed in the terminal device 101, 102, or 103, or in another terminal device different from the terminal device 101, 102, or 103.
For example, the user log generated after the user session is ended is typically stored in a database, which may be accessed by the server 105 or any of the terminal devices 101, 102, 103 via a network link. Any one of the server 105 or the terminal devices 101, 102, and 103 may execute the recommendation model training method provided in the embodiment of the present disclosure locally after acquiring the training data from the database, or may transmit the training data to another terminal device, a server, or a server cluster, and execute the recommendation model training method provided in the embodiment of the present disclosure by another terminal device, a server, or a server cluster that receives the training data.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
FIG. 2 schematically shows a flow chart of a recommendation model training method according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S210 to S230.
It should be noted that, in the flowcharts of this disclosure, unless an execution order between different operations is explicitly stated or is required by the technical implementation, multiple operations need not be executed sequentially and may be executed in a different order or simultaneously.
In operation S210, training data is acquired, wherein the training data includes user attributes, a sequence of user behaviors within a preset time range, and transaction attributes.
In operation S220, a tag sequence of training data is determined based on the user behavior sequence and the transaction attributes.
In operation S230, the initial model is trained using the training data and the tag sequence of the training data to obtain a target recommendation model.
According to the embodiment of the disclosure, the training data can be extracted from user logs; to improve the reliability of the model, the training data may be extracted from the user logs of a plurality of users over the most recent week.
According to an embodiment of the present disclosure, the user attribute may include related information pre-stored at the time of user registration.
According to an embodiment of the present disclosure, the preset time range may be a period of time after the user opens a session and accepts content presentation; for example, the preset time range may be one hour after the user is shown the relevant advertisement. If the session duration of the user is shorter than the preset time range, the preset time range may be set to the session duration. If the session duration is longer than the preset time range, the session may be segmented, and multiple pieces of training data obtained from the user log of that session.
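As an illustrative aid only, the following Python sketch shows one way a session's access events could be segmented into preset-time-range windows, each window yielding one piece of training data; the function name, the event representation, and the one-hour default are assumptions made for illustration rather than the patent's implementation.

from typing import List, Tuple

def split_session(events: List[Tuple[float, str]],
                  window_seconds: float = 3600.0) -> List[List[Tuple[float, str]]]:
    """Split a session's (timestamp, content_node) events into windows of at most
    window_seconds each, so that every window yields one piece of training data.
    A session shorter than the window produces a single segment."""
    if not events:
        return []
    events = sorted(events, key=lambda e: e[0])
    segments: List[List[Tuple[float, str]]] = []
    current: List[Tuple[float, str]] = []
    start = events[0][0]
    for ts, node in events:
        if ts - start >= window_seconds:  # current window is full; open a new one
            segments.append(current)
            current, start = [], ts
        current.append((ts, node))
    segments.append(current)
    return segments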
According to the embodiment of the disclosure, the user behavior sequence may be an array, each element in the array corresponds to a specific service content, and a change of each element in the array may represent an access condition of a user to the specific service.
According to an embodiment of the present disclosure, the transaction attribute may be an array, each element in the array corresponds to a specific service content, and a value of each element in the array may represent a transaction value of the user under the specific service.
According to the embodiment of the present disclosure, the label sequence of the training data may be an array consisting of "0" and "1", each element in the array corresponds to a specific service content, and the value of each element in the array may be a comprehensive evaluation value based on the actual selection condition and the intermediate profit of the user.
According to an embodiment of the present disclosure, the initial model may adopt the structure of an existing model, such as a click-through-rate model (e.g., DeepFM).
According to the embodiment of the disclosure, loss functions such as log loss or L2 loss can be used when training the initial model, and the model can be iterated by stochastic gradient descent or similar methods.
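The loss and update rule can be illustrated with a minimal NumPy sketch; a simple logistic model stands in here for the click-through-rate model, and the function names and learning rate are illustrative assumptions rather than the patent's implementation.

import numpy as np

def log_loss(y_true: np.ndarray, y_pred: np.ndarray, eps: float = 1e-12) -> float:
    # Binary cross-entropy (log loss) averaged over the batch.
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return float(-np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred)))

def sgd_step(w: np.ndarray, x: np.ndarray, y: np.ndarray, lr: float = 0.01) -> np.ndarray:
    # One stochastic-gradient-descent update for a logistic model p = sigmoid(x @ w).
    p = 1.0 / (1.0 + np.exp(-x @ w))
    grad = x.T @ (p - y) / len(y)  # gradient of the log loss with respect to w
    return w - lr * grad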
According to the embodiment of the disclosure, the acquired user historical behavior data and transaction data are used as training data, the training data are labeled according to the user historical behavior data and the transaction data, and the labeled training data are then used to train the recommendation model. By labeling the training data with data of different dimensions, the technical problem that the traffic value of a user cannot be optimized in an indirect bidding scenario is at least partially solved, the user experience is effectively improved, and the intermediate profit of the platform provider is guaranteed.
The method illustrated in FIG. 2 is further described with reference to FIGS. 3A, 3B, and 4 in conjunction with specific embodiments.
Fig. 3A schematically illustrates a structural diagram of training data according to an embodiment of the present disclosure.
As shown in FIG. 3A, training data 300 is comprised of user attributes 310, user behavior sequences 320, and transaction attributes 330.
According to an embodiment of the present disclosure, the specific fields in the user attributes 310 may include the user's age, assets, gender, educational background, and the like; part of the user attribute information is abstracted into discretized numerical values to obtain the user attributes used in the training data for the model.
For example, for the age attribute, instead of using the real age directly as training data, a list of age groups may be constructed in advance, e.g., 0-18, 19-24, 25-29, 30-34, and so on, and the ordinal number of the age group containing the real age is used as the attribute value; a real age in 0-18 corresponds to the value 1, and a real age in 25-29 corresponds to the value 3.
As another example, for the asset attribute, the logarithm of the user's actual assets may be taken and rounded, and the rounded value used as the attribute value in the training data.
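A minimal sketch of this kind of attribute discretization is given below; the bucket boundaries follow the example above, while the base-10 logarithm and the handling of ages outside the listed buckets are assumptions made only for illustration.

import math

AGE_BUCKETS = [(0, 18), (19, 24), (25, 29), (30, 34)]  # extended as needed in practice

def age_to_bucket(age: int) -> int:
    # Ordinal number (1-based) of the age group containing the real age.
    for i, (low, high) in enumerate(AGE_BUCKETS, start=1):
        if low <= age <= high:
            return i
    return len(AGE_BUCKETS) + 1  # ages above the last listed group

def discretize_assets(assets: float) -> int:
    # Rounded logarithm of the user's actual assets serves as the attribute value.
    return round(math.log10(assets)) if assets > 0 else 0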
According to an embodiment of the present disclosure, each element in the user behavior sequence 320 may represent whether the user accessed the corresponding content node within a preset time range.
For example, in a banking application, a plurality of content nodes such as wealth management, insurance, and credit card can be embedded in advance; after a content node is accessed, the application jumps to the corresponding business page. The user behavior sequence 320 may be an array containing only the elements "0" and "1", and the positions of the user behavior sequence 320 may correspond one to one to the plurality of content nodes. After the user starts a session of the application, full-link tracing can be performed on the user's behavior path to obtain the user's access to each content node within the preset time range; if it is determined that the user accessed a content node, the value corresponding to that content node in the user behavior sequence 320 may be set to 1, and if it is determined that the user did not access the content node, the corresponding value may be set to 0.
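A sketch of how such a binary behavior sequence could be built from the set of nodes a user visited is shown below; the node list and function name are hypothetical placeholders.

CONTENT_NODES = ["wealth_management", "insurance", "credit_card", "deposit", "login", "query"]

def behavior_sequence(visited_nodes: set) -> list:
    # One position per content node, in a fixed order: 1 if the user accessed the
    # node within the preset time range, otherwise 0.
    return [1 if node in visited_nodes else 0 for node in CONTENT_NODES]

# e.g. behavior_sequence({"insurance", "login"}) -> [0, 1, 0, 0, 1, 0]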
According to an embodiment of the present disclosure, each element in the transaction attribute 330 may represent a transaction value of the user in the service page corresponding to the content node.
For example, the transaction attribute 330 may also be an array whose positions correspond one to one to the plurality of content nodes; for a node with a transaction value, the logarithm of the transaction value is taken and rounded to obtain the value of the corresponding element in the array.
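Following the same node order, the transaction attribute array could be built as in the sketch below; treating nodes without a transaction as 0 and using a base-10 logarithm are assumptions not stated in the text.

import math

def transaction_attributes(txn_values: dict, nodes: list) -> list:
    # Rounded logarithm of the user's transaction value at each content node;
    # nodes with no transaction are assumed to contribute 0.
    return [round(math.log10(txn_values[n])) if txn_values.get(n, 0) > 0 else 0
            for n in nodes]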
Fig. 3B schematically shows a structural diagram of a tag sequence of training data according to an embodiment of the present disclosure.
As shown in fig. 3B, the tag sequence 340 may be calculated from the user behavior sequence 320, the first scoring sequence 341, and the second scoring sequence 342.
According to an embodiment of the present disclosure, the first scoring sequence 341 may be constructed according to a proportion of preset content nodes among content nodes accessed by a user.
For example, the content nodes of the banking application may include content nodes that do not relate to specific services, such as "login", "registration" and "query", and may also include N preset content nodes that relate to specific services, such as "wealth management", "insurance" and "credit card". In a session, based on the user behavior sequence 320 of that session, the number M of preset content nodes among the content nodes accessed by the user can be determined; the ratio M/N is then used as the first score value, an array is constructed from the first score value, and the constructed array is the first score sequence 341.
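The construction of the first score sequence can be sketched as follows; representing the preset nodes with a 0/1 mask is an illustrative choice rather than a requirement of the patent.

def first_score_sequence(behavior: list, preset_mask: list) -> list:
    # behavior: 0/1 user behavior sequence; preset_mask: 1 where the node is one of
    # the N preset (business-related) content nodes, otherwise 0.
    n = sum(preset_mask)
    m = sum(b for b, is_preset in zip(behavior, preset_mask) if is_preset)
    score = m / n if n else 0.0
    # Every value of the initial sequence is replaced with the ratio M/N.
    return [score] * len(behavior)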
In some embodiments, the preset content nodes may be arbitrarily added or deleted.
For example, in a banking application, the originally configured preset content nodes may be the three nodes "wealth management", "insurance" and "credit card"; according to the needs of subsequent services, the "deposit" node among the other content nodes may be added to the preset content nodes to obtain a new set of preset content nodes.
According to embodiments of the present disclosure, the second scoring sequence 342 may be constructed from the transaction values.
For example, in a banking application, if the user's transaction value for the specific service corresponding to a content node is a, and the profit rate of that service in the application is b, the transaction brings the bank an intermediate profit of a × b; the intermediate profit can then be converted, according to a preset profit value, into a number between 0 and 1, which is used as the second score value; finally, an array is constructed from the second score value, and the constructed array is the second score sequence 342.
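A sketch of the second score sequence is given below; clamping the ratio at 1.0 is an assumption used here to keep each score between 0 and 1, since the text only states that the intermediate profit is converted to a value in that range according to a preset profit.

def second_score_sequence(txn_values: list, profit_rates: list, preset_profit: float) -> list:
    # Intermediate profit per node = transaction value (a) * profit rate (b);
    # the ratio of the intermediate profit to the preset profit, clamped to 1.0,
    # replaces the corresponding value of the initial sequence.
    return [min(value * rate / preset_profit, 1.0)
            for value, rate in zip(txn_values, profit_rates)]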
According to embodiments of the present disclosure, the user behavior sequence 320, the first scoring sequence 341, and the second scoring sequence 342 may be weighted and summed with weights w1, w2 and w3 to obtain the third scoring sequence 343.
According to the embodiment of the present disclosure, the weights may be selected according to the constraint condition shown in formula (1), which appears only as an image in the original text.
According to an embodiment of the present disclosure, each sequence value in the third scoring sequence 343 may be binarized to obtain the tag sequence 340.
For example, for each sequence value of the third scoring sequence, the sequence value may be set to 1 if it is greater than a preset threshold, and set to 0 if it is less than or equal to the preset threshold.
According to the embodiment of the disclosure, the preset threshold may be any number between 0.5 and 1; the closer the preset threshold is to 1, the greater the influence of the intermediate profit on the recommendation model.
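Putting the pieces together, the label sequence could be computed as in the sketch below; the specific weights and threshold are illustrative values only, and the actual weights are constrained by formula (1).

def label_sequence(behavior: list, first: list, second: list,
                   w1: float = 0.3, w2: float = 0.3, w3: float = 0.4,
                   threshold: float = 0.7) -> list:
    # Third scoring sequence: element-wise weighted sum of the user behavior
    # sequence and the first and second scoring sequences.
    third = [w1 * b + w2 * s1 + w3 * s2 for b, s1, s2 in zip(behavior, first, second)]
    # Binarize each value against the preset threshold to obtain the tag sequence.
    return [1 if v > threshold else 0 for v in third]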
FIG. 4 schematically shows a schematic diagram of an initial model training procedure according to an embodiment of the disclosure.
As shown in FIG. 4, the initial model training procedure includes operations S401 to S405.
In operation S401, training data is classified into a positive sample set and a negative sample set.
In operation S402, it is determined whether the number of samples in the positive sample set and the negative sample set reaches a preset ratio. In the case where the determination result is no, operation S403 is performed; in a case where the determination result is yes, operation S404 is performed.
In operation S403, the number of samples in the positive sample set or the negative sample set is adjusted.
In operation S404, positive and negative samples are alternately input to the initial model.
In operation S405, it is determined whether model training is complete. If the model is determined to be fully trained, the target recommendation model is obtained and training ends; if the model is not yet fully trained, the method returns to operation S404 and continues training with the input samples.
According to the embodiment of the present disclosure, the training data may be classified according to the tag sequence; for example, training data whose tag sequence contains only values of 0 may be added to the negative sample set as negative samples.
According to the embodiment of the disclosure, the preset ratio may be set to 1, so that positive samples and negative samples are alternately input into the initial model for training in a 1:1 manner.
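One possible way to balance and interleave the sample sets is sketched below; down-sampling the larger set is only one way of "adjusting the number of samples", and the 1:1 ratio follows the example above.

import random

def balance_and_interleave(positive: list, negative: list, seed: int = 0) -> list:
    # Down-sample the larger set so the positive and negative sets reach a 1:1
    # ratio, then interleave them so that positive and negative samples are fed
    # to the initial model alternately.
    rng = random.Random(seed)
    n = min(len(positive), len(negative))
    positive, negative = rng.sample(positive, n), rng.sample(negative, n)
    interleaved = []
    for pos_sample, neg_sample in zip(positive, negative):
        interleaved.extend([pos_sample, neg_sample])
    return interleaved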
According to the embodiment of the disclosure, during initial model training, hyperparameters such as the number of training epochs, the amount of data in each training batch, and a loss value threshold can be set to control the training process; the model may be considered fully trained when the current training epoch reaches the set number of epochs or the loss value falls below the loss value threshold.
According to the embodiment of the disclosure, alternately inputting positive and negative samples during training can effectively improve the robustness of the trained model.
FIG. 5 schematically shows a block diagram of a recommendation model training apparatus according to an embodiment of the present disclosure.
As shown in fig. 5, the recommendation model training apparatus includes an obtaining module 510, a labeling module 520, and a training module 530.
An obtaining module 510, configured to obtain training data, where the training data includes a user attribute, a user behavior sequence within a preset time range, and a transaction attribute.
And the labeling module 520 is used for determining the label sequence of the training data based on the user behavior sequence and the transaction attribute.
A training module 530, configured to train the initial model using the training data and the label sequence of the training data to obtain a target recommendation model.
According to the embodiment of the disclosure, the acquired user historical behavior data and transaction data are used as training data, the training data are labeled according to the user historical behavior data and the transaction data, and the labeled training data are then used to train the recommendation model. By labeling the training data with data of different dimensions, the technical problem that the traffic value of a user cannot be optimized in an indirect bidding scenario is at least partially solved, the user experience is effectively improved, and the intermediate profit of the platform provider is guaranteed.
According to the embodiment of the disclosure, the number of bits of the user behavior sequence is equal to the number of content nodes, and the user behavior sequence corresponds to the content nodes one to one.
According to an embodiment of the present disclosure, the recommendation model training apparatus further includes a determination module.
And the determining module is used for determining the user behavior sequence according to the access condition of the user to each content node within the preset time range. For each content node, under the condition that a user accesses the content node, determining that the numerical value corresponding to the content node in the user behavior list is 1; and under the condition that the user does not access the content node, determining that the numerical value corresponding to the content node in the user behavior list is 0.
According to an embodiment of the present disclosure, the labeling module 520 includes a first labeling unit, a second labeling unit, a third labeling unit, and a fourth labeling unit.
And the first labeling unit is used for constructing a first score sequence of the training data based on the user behavior sequence.
And the second labeling unit is used for constructing a second score sequence of the training data based on the transaction attributes.
And the third labeling unit is used for performing weighted summation on the user behavior sequence, the first score sequence and the second score sequence to obtain a third score sequence of the training data.
And the fourth labeling unit is used for processing the third scoring sequence according to a preset rule to obtain a label sequence of the training data.
According to the embodiment of the disclosure, the content nodes include N preset content nodes, where N is a positive integer.
According to an embodiment of the present disclosure, the first labeling unit includes a first labeling subunit, a second labeling subunit, and a third labeling subunit.
The first labeling subunit is used for determining the number M of preset content nodes in the content nodes accessed by the user based on the user behavior sequence, wherein M is a positive integer less than or equal to N.
And the second labeling subunit is used for generating an initial sequence with the length equal to that of the user behavior sequence.
And the third labeling subunit is used for replacing the numerical values in the initial sequence with the ratio of M to N to obtain the first scoring sequence.
According to an embodiment of the present disclosure, the transaction attribute includes a transaction value of the user at each content node within a preset time range.
According to an embodiment of the present disclosure, the second labeling unit includes a fourth labeling sub-unit, a fifth labeling sub-unit, and a sixth labeling sub-unit.
And the fourth labeling subunit is used for determining the intermediate profit of each content node based on the transaction value of each content node and the profit rate of each content node.
And the fifth labeling subunit is used for generating an initial sequence with the length equal to that of the user behavior sequence.
And the sixth labeling subunit is used for replacing the numerical values in the initial sequence with the ratio of the intermediate profit to the preset profit to obtain a second scoring sequence.
According to an embodiment of the present disclosure, the fourth labeling unit includes a seventh labeling sub-unit and an eighth labeling sub-unit.
A seventh labeling subunit, configured to set, for each sequence value of the third scoring sequence, the sequence value to 1 when the sequence value is greater than a preset threshold; and setting the sequence value to be 0 under the condition that the sequence value is less than or equal to the preset threshold value.
And the eighth labeling subunit is used for traversing all the sequence values in the third scoring sequence to obtain a tag sequence.
According to an embodiment of the present disclosure, the training module 530 includes a first training unit, a second training unit, and a third training unit.
The first training unit is used for classifying the training data into a positive sample set and a negative sample set according to the label sequence, wherein the negative sample set comprises the training data of which all numerical values in the label sequence are 0.
And the second training unit is used for adjusting the number of the samples in the positive sample set or the negative sample set so as to enable the number of the samples in the positive sample set after adjustment and the number of the samples in the negative sample set after adjustment to reach a preset proportion.
And the third training unit is used for inputting the samples in the adjusted positive sample set and the samples in the adjusted negative sample set into the initial model alternately so as to train the initial model.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any number of the obtaining module 510, the labeling module 520, and the training module 530 may be combined and implemented in one module/unit/sub-unit, or any one of the modules/units/sub-units may be split into a plurality of modules/units/sub-units. Alternatively, at least part of the functionality of one or more of these modules/units/sub-units may be combined with at least part of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to an embodiment of the present disclosure, at least one of the obtaining module 510, the labeling module 520, and the training module 530 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware. Alternatively, at least one of the obtaining module 510, the labeling module 520 and the training module 530 may be at least partially implemented as a computer program module, which when executed, may perform a corresponding function.
It should be noted that, in the embodiment of the present disclosure, the recommendation model training device part corresponds to the recommendation model training method part in the embodiment of the present disclosure, and the description of the recommendation model training device part specifically refers to the recommendation model training method part, which is not described herein again.
FIG. 6 schematically illustrates a block diagram of an electronic device suitable for implementing a recommendation model training method in accordance with an embodiment of the present disclosure. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, a computer electronic device 600 according to an embodiment of the present disclosure includes a processor 601, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. Processor 601 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 601 may also include onboard memory for caching purposes. Processor 601 may include a single processing unit or multiple processing units for performing different actions of a method flow according to embodiments of the disclosure.
In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are stored. The processor 601, the ROM602, and the RAM 603 are connected to each other via a bus 604. The processor 601 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM602 and/or RAM 603. It is to be noted that the programs may also be stored in one or more memories other than the ROM602 and RAM 603. The processor 601 may also perform various operations of the method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the disclosure, the electronic device 600 may also include an input/output (I/O) interface 605, which is also connected to the bus 604. The electronic device 600 may also include one or more of the following components connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the internet. The drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted on the drive 610 as necessary, so that a computer program read out therefrom is installed into the storage section 608 as needed.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program, when executed by the processor 601, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to an embodiment of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM602 and/or RAM 603 described above and/or one or more memories other than the ROM602 and RAM 603.
Embodiments of the present disclosure also include a computer program product comprising a computer program that contains program code for performing the method provided by the embodiments of the present disclosure; when the computer program product is run on an electronic device, the program code causes the electronic device to carry out the recommendation model training method provided by the embodiments of the present disclosure.
The computer program, when executed by the processor 601, performs the above-described functions defined in the system/apparatus of the embodiments of the present disclosure. The systems, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
In one embodiment, the computer program may be hosted on a tangible storage medium such as an optical storage device, a magnetic storage device, or the like. In another embodiment, the computer program may also be transmitted, distributed in the form of a signal on a network medium, downloaded and installed through the communication section 609, and/or installed from the removable medium 611. The computer program containing program code may be transmitted using any suitable network medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
In accordance with embodiments of the present disclosure, program code for executing computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages, and in particular, these computer programs may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. The programming language includes, but is not limited to, programming languages such as Java, C++, Python, the C language, or the like. The program code may execute entirely on the user computing device, partly on the user device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or combinations of features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or combinations are not expressly recited in the present disclosure. In particular, various combinations and/or combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or associations are within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these embodiments are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments have been described separately above, this does not mean that the measures in the respective embodiments cannot be used in advantageous combination. The scope of the present disclosure is defined by the appended claims and their equivalents. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and all such alternatives and modifications are intended to fall within the scope of the present disclosure.

Claims (11)

1. A recommendation model training method, comprising:
acquiring training data, wherein the training data comprises user attributes, a user behavior sequence within a preset time range, and transaction attributes;
determining a label sequence of the training data based on the user behavior sequence and the transaction attributes; and
training an initial model using the training data and the label sequence of the training data to obtain a target recommendation model.
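By way of illustration only (and not as part of the claims), the overall flow of claim 1 can be sketched as follows. Every function name and the dictionary layout of a training sample below are assumptions introduced for this sketch.

```python
# Minimal sketch of the claim 1 flow; all names here are hypothetical.
from typing import Dict, List, Tuple


def acquire_training_data() -> List[Dict]:
    """Hypothetical loader returning samples with user attributes, a user
    behavior sequence over a preset time range, and transaction attributes."""
    return [{
        "user_attributes": {"age": 30, "segment": "retail"},
        "behavior_sequence": [1, 0, 1, 0],              # one slot per content node
        "transaction_attributes": [100.0, 0.0, 50.0, 0.0],
    }]


def determine_label_sequence(sample: Dict) -> List[int]:
    """Stand-in for the label construction detailed in claims 3-6."""
    return [1 if visited or value > 0 else 0
            for visited, value in zip(sample["behavior_sequence"],
                                      sample["transaction_attributes"])]


def train_initial_model(labelled: List[Tuple[Dict, List[int]]]) -> str:
    """Stand-in for fitting the initial model (sampling scheme: claim 7)."""
    return "target_recommendation_model"


samples = acquire_training_data()
labelled = [(s, determine_label_sequence(s)) for s in samples]
model = train_initial_model(labelled)
```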
2. The method of claim 1, wherein the length of the user behavior sequence is equal to the number of content nodes, and the elements of the user behavior sequence correspond one-to-one to the content nodes;
the method further comprises the following steps:
determining the user behavior sequence according to the access condition of the user to each content node within the preset time range;
for each content node, determining that the numerical value corresponding to the content node in the user behavior sequence is 1 in the case that the user has accessed the content node, and determining that the numerical value corresponding to the content node in the user behavior sequence is 0 in the case that the user has not accessed the content node.
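A minimal illustrative sketch of the behavior-sequence construction in claim 2, assuming content nodes are identified by strings and the accessed nodes are available as a set; both representations are assumptions of this sketch, not part of the claims.

```python
from typing import List, Set


def build_behavior_sequence(content_nodes: List[str],
                            accessed_nodes: Set[str]) -> List[int]:
    # One position per content node: 1 if the user accessed it within the
    # preset time range, 0 otherwise (claim 2).
    return [1 if node in accessed_nodes else 0 for node in content_nodes]


# Example: four content nodes, two of which were accessed.
nodes = ["funds", "deposits", "loans", "insurance"]
print(build_behavior_sequence(nodes, {"funds", "loans"}))  # [1, 0, 1, 0]
```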
3. The method of claim 2, wherein the determining the label sequence of the training data based on the user behavior sequence and the transaction attributes comprises:
constructing a first score sequence of the training data based on the user behavior sequence;
constructing a second score sequence of the training data based on the transaction attributes;
carrying out weighted summation on the user behavior sequence, the first score sequence and the second score sequence to obtain a third score sequence of the training data; and
processing the third score sequence according to a preset rule to obtain the label sequence of the training data.
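The combination step of claim 3 amounts to an element-wise weighted sum of the user behavior sequence and the two score sequences. The sketch below illustrates this under assumed weights; the weight values are arbitrary and not taken from the disclosure.

```python
from typing import List, Sequence


def third_score_sequence(behavior_seq: Sequence[int],
                         first_scores: Sequence[float],
                         second_scores: Sequence[float],
                         weights: Sequence[float] = (0.4, 0.3, 0.3)) -> List[float]:
    # Element-wise weighted sum of the user behavior sequence and the
    # first and second score sequences (claim 3).
    w_b, w_1, w_2 = weights
    return [w_b * b + w_1 * s1 + w_2 * s2
            for b, s1, s2 in zip(behavior_seq, first_scores, second_scores)]


# Example with four content nodes.
print(third_score_sequence([1, 0, 1, 0],
                           [0.5, 0.5, 0.5, 0.5],
                           [0.8, 0.0, 0.2, 0.0]))
# approximately [0.79, 0.15, 0.61, 0.15] (up to floating-point rounding)
```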
4. The method of claim 3, wherein the content nodes comprise N preset content nodes, wherein N is a positive integer;
wherein the constructing a first score sequence of the training data based on the user behavior sequence comprises:
determining, based on the user behavior sequence, the number M of preset content nodes among the content nodes accessed by the user, wherein M is a positive integer less than or equal to N;
generating an initial sequence with a length equal to that of the user behavior sequence; and
replacing the numerical values in the initial sequence with the ratio of M to N to obtain the first score sequence.
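An illustrative sketch of the first-score-sequence construction in claim 4, assuming the preset content nodes are given as a set of positions in the behavior sequence; this representation is an assumption of the sketch.

```python
from typing import List, Set


def first_score_sequence(behavior_seq: List[int],
                         preset_node_indices: Set[int]) -> List[float]:
    n = len(preset_node_indices)                        # N preset content nodes
    m = sum(1 for i, visited in enumerate(behavior_seq)
            if visited and i in preset_node_indices)    # M of them were accessed
    ratio = m / n if n else 0.0
    # Initial sequence of the same length as the behavior sequence,
    # with every value replaced by the ratio M/N (claim 4).
    return [ratio] * len(behavior_seq)


# Example: nodes 0-2 are preset content nodes; the user accessed nodes 0 and 3.
print(first_score_sequence([1, 0, 0, 1], {0, 1, 2}))
# [0.333..., 0.333..., 0.333..., 0.333...]
```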
5. The method of claim 3, wherein the transaction attributes include transaction values of the user at each of the content nodes within the preset time range;
wherein the constructing the second score sequence of the training data based on the transaction attributes comprises:
determining an intermediate profit for each of the content nodes based on the transaction value for each of the content nodes and the profitability for each of the content nodes;
generating an initial sequence with a length equal to that of the user behavior sequence; and
replacing the numerical values in the initial sequence with the ratio of the intermediate profit to a preset profit to obtain the second score sequence.
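An illustrative sketch of the second-score-sequence construction in claim 5. The reading of the intermediate profit as the transaction value multiplied by a per-node profitability, and the example preset profit, are assumptions made only to keep the sketch concrete.

```python
from typing import List


def second_score_sequence(transaction_values: List[float],
                          profitabilities: List[float],
                          preset_profit: float) -> List[float]:
    # Intermediate profit per content node (assumed here to be the
    # transaction value multiplied by the node's profitability),
    # normalised by a preset profit (claim 5).
    return [(value * rate) / preset_profit
            for value, rate in zip(transaction_values, profitabilities)]


# Example with three content nodes and a preset profit of 100.
print(second_score_sequence([1000.0, 0.0, 500.0], [0.05, 0.02, 0.10], 100.0))
# [0.5, 0.0, 0.5]
```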
6. The method of claim 3, wherein the processing the third score sequence according to the preset rule to obtain the label sequence of the training data comprises:
for each sequence value of the third score sequence, setting the sequence value to 1 if the sequence value is greater than a preset threshold, and setting the sequence value to 0 if the sequence value is less than or equal to the preset threshold; and
traversing all sequence values in the third score sequence to obtain the label sequence.
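The rule of claim 6 reduces to element-wise thresholding of the third score sequence, as in the following sketch; the threshold value shown is an arbitrary illustrative choice.

```python
from typing import List


def label_sequence(third_scores: List[float],
                   preset_threshold: float = 0.5) -> List[int]:
    # Traverse every sequence value: 1 if it exceeds the preset threshold,
    # 0 otherwise (claim 6).
    return [1 if score > preset_threshold else 0 for score in third_scores]


print(label_sequence([0.8, 0.2, 0.5]))  # [1, 0, 0]  (0.5 is not greater than 0.5)
```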
7. The method of claim 1, wherein the training an initial model using the training data and the label sequence of the training data comprises:
classifying the training data into a positive sample set and a negative sample set according to the label sequence, wherein the negative sample set comprises the training data of which all numerical values in the label sequence are 0;
adjusting the number of samples in the positive sample set or the negative sample set so that, after adjustment, the numbers of samples in the positive sample set and the negative sample set reach a preset ratio; and
alternately inputting the samples in the adjusted positive sample set and the samples in the adjusted negative sample set into the initial model so as to train the initial model.
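An illustrative sketch of the sample handling in claim 7, assuming the larger set is down-sampled to reach the preset ratio and that the alternate inputting is realised as simple interleaving; both choices are assumptions of this sketch rather than requirements of the claim.

```python
import random
from itertools import zip_longest
from typing import Dict, List, Tuple

Sample = Tuple[Dict, List[int]]  # (training data, label sequence)


def split_samples(samples: List[Sample]) -> Tuple[List[Sample], List[Sample]]:
    # A sample is negative when every value in its label sequence is 0 (claim 7).
    positives = [s for s in samples if any(s[1])]
    negatives = [s for s in samples if not any(s[1])]
    return positives, negatives


def balance(positives: List[Sample], negatives: List[Sample],
            ratio: float = 1.0) -> Tuple[List[Sample], List[Sample]]:
    # Down-sample the larger set so that |positives| / |negatives| ~= ratio.
    # (Up-sampling the smaller set would equally satisfy the claim.)
    target_negatives = int(len(positives) / ratio)
    if len(negatives) > target_negatives:
        negatives = random.sample(negatives, target_negatives)
    else:
        positives = random.sample(positives, int(len(negatives) * ratio))
    return positives, negatives


def interleave(positives: List[Sample], negatives: List[Sample]):
    # Alternately yield positive and negative samples to feed the initial model.
    for pair in zip_longest(positives, negatives):
        for sample in pair:
            if sample is not None:
                yield sample
```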
8. A recommendation model training apparatus comprising:
an acquisition module for acquiring training data, wherein the training data comprises user attributes, a user behavior sequence within a preset time range, and transaction attributes;
a marking module for determining a label sequence of the training data based on the user behavior sequence and the transaction attributes; and
a training module for training an initial model using the training data and the label sequence of the training data to obtain a target recommendation model.
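The apparatus of claim 8 can be pictured as three cooperating components, sketched below with hypothetical class and method names that are not taken from the disclosure.

```python
class AcquisitionModule:
    def acquire(self):
        # Returns training data: user attributes, a user behavior sequence
        # within a preset time range, and transaction attributes.
        raise NotImplementedError


class MarkingModule:
    def label(self, behavior_sequence, transaction_attributes):
        # Produces the label sequence of a training sample (claims 3-6).
        raise NotImplementedError


class TrainingModule:
    def train(self, training_data, label_sequences):
        # Fits the initial model to obtain the target recommendation model.
        raise NotImplementedError
```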
9. An electronic device, comprising:
one or more processors;
a memory for storing one or more instructions,
wherein the one or more instructions, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 7.
11. A computer program product comprising computer-executable instructions which, when executed, implement the method of any one of claims 1 to 7.
CN202110674994.5A 2021-06-17 2021-06-17 Recommendation model training method and device, electronic equipment and storage medium Pending CN113393299A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110674994.5A CN113393299A (en) 2021-06-17 2021-06-17 Recommendation model training method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110674994.5A CN113393299A (en) 2021-06-17 2021-06-17 Recommendation model training method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113393299A true CN113393299A (en) 2021-09-14

Family

ID=77621740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110674994.5A Pending CN113393299A (en) 2021-06-17 2021-06-17 Recommendation model training method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113393299A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114579850A (en) * 2022-02-25 2022-06-03 北京百度网讯科技有限公司 Model training method, data recommendation device, electronic equipment and storage medium
CN114708109A (en) * 2022-03-01 2022-07-05 上海钐昆网络科技有限公司 Risk recognition model training method, device, equipment and storage medium
CN114708109B (en) * 2022-03-01 2022-11-11 上海钐昆网络科技有限公司 Risk recognition model training method, device, equipment and storage medium
WO2024060587A1 (en) * 2022-09-19 2024-03-28 北京沃东天骏信息技术有限公司 Generation method for self-supervised learning model and generation method for conversion rate estimation model

Similar Documents

Publication Publication Date Title
US20240028658A1 (en) Systems, apparatuses, and methods for providing a quality score based recommendation
US9064212B2 (en) Automatic event categorization for event ticket network systems
US20170178199A1 (en) Method and system for adaptively providing personalized marketing experiences to potential customers and users of a tax return preparation system
CN110334289B (en) Travel destination determining method and target user determining method
CN113393299A (en) Recommendation model training method and device, electronic equipment and storage medium
US20150206248A1 (en) Apparatus and method for supplying optimized insurance quotes
US20200342500A1 (en) Systems and methods for self-serve marketing pages with multi-armed bandit
CN111095330B (en) Machine learning method and system for predicting online user interactions
US20120253923A1 (en) Systems and methods for providing targeted marketing campaign to merchant
US20170330231A1 (en) Method and system to display targeted ads based on ranking output of transactions
US20180285748A1 (en) Performance metric prediction for delivery of electronic media content items
CN110717597A (en) Method and device for acquiring time sequence characteristics by using machine learning model
CN110880082A (en) Service evaluation method, device, system, electronic equipment and readable storage medium
CN113014558B (en) Message identification method, device, computer system and readable storage medium
US20140244405A1 (en) Automatic Generation of Digital Advertisements
US11257108B2 (en) Systems and methods for dynamic product offerings
EP2991019A1 (en) Real-time financial system advertisement sharing system
US10628806B2 (en) System and method for test data provisioning
CN115795345A (en) Information processing method, device, equipment and storage medium
US11645664B2 (en) Dynamic web content insertion
CN113010798A (en) Information recommendation method, information recommendation device, electronic equipment and readable storage medium
CN113391988A (en) Method and device for losing user retention, electronic equipment and storage medium
CN114117227A (en) Operation maintenance method, system, electronic equipment and storage medium
CN113128773A (en) Training method of address prediction model, address prediction method and device
CN113094595A (en) Object recognition method, device, computer system and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination