CN113962292A - Information processing method and device, storage medium and electronic equipment


Info

Publication number: CN113962292A (application number CN202111152863.7A)
Authority: CN (China)
Prior art keywords: vector, attention, target, operation data, target product
Legal status: Pending
Application number: CN202111152863.7A
Other languages: Chinese (zh)
Inventors: 蔡林佑, 李爽, 谢乾龙, 王兴星, 王栋
Current assignee: Beijing Sankuai Online Technology Co Ltd
Original assignee: Beijing Sankuai Online Technology Co Ltd
Application filed by Beijing Sankuai Online Technology Co Ltd
Priority: CN202111152863.7A
Publication: CN113962292A


Classifications

    • G06F18/2411 — Classification techniques relating to the classification model, based on the proximity to a decision surface, e.g. support vector machines
    • G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/29 — Graphical models, e.g. Bayesian networks
    • G06Q30/0631 — Item recommendations (Electronic shopping [e-shopping])

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to an information processing method and apparatus, a storage medium, and an electronic device in the technical field of electronic information processing. The method comprises: acquiring an operation information sequence corresponding to a target object and dividing it into a plurality of operation data sets, each operation data set including a specified number of operation data; obtaining a target product vector for characterizing a target product, and obtaining an operation vector set corresponding to each operation data set, each operation vector set including an operation vector characterizing each operation data in the corresponding operation data set; determining, by using a pre-trained capsule network, an aggregation vector corresponding to each operation vector set; and determining, by using a pre-trained attention network, a recognition result according to the plurality of aggregation vectors and the target product vector, the recognition result indicating the feedback of the target object to the target product. The method makes full use of the information carried by the operation information sequence and improves both the speed and the accuracy of recognition.

Description

Information processing method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of electronic information processing technologies, and in particular, to an information processing method and apparatus, a storage medium, and an electronic device.
Background
With the continued development of e-commerce technology and its supporting services, people's shopping behaviors and habits in daily life have changed greatly. Purchasing products through e-commerce gives users more choices and makes the whole shopping process more convenient. However, products come in so many varieties and brands that users often find it difficult to choose. To improve the efficiency of information delivery and avoid wasting processing and bandwidth resources, historical operation information can be collected and analyzed to identify products that meet specific requirements and display them to users. The data volume of historical operation information is generally large: identifying it directly entails a prohibitive amount of computation, while screening it by truncation, classification, and similar means discards part of the historical operation information and reduces recognition accuracy.
Disclosure of Invention
The present disclosure provides an information processing method and apparatus, a storage medium, and an electronic device to solve the above problems in the related art.
According to a first aspect of the embodiments of the present disclosure, there is provided a method for processing information, the method including:
acquiring an operation information sequence corresponding to a target object, and dividing the operation information sequence into a plurality of operation data sets, wherein each operation data set comprises a specified number of operation data;
obtaining a target product vector for representing a target product, and obtaining an operation vector set corresponding to each operation data set, wherein the operation vector set comprises an operation vector for representing each operation data in the corresponding operation data set;
determining an aggregation vector corresponding to each operation vector set according to a plurality of operation vector sets by utilizing a pre-trained capsule network;
and determining a recognition result according to the aggregation vectors and the target product vector by utilizing a pre-trained attention network, wherein the recognition result is used for indicating the feedback of the target object to the target product.
Optionally, the obtaining a target product vector for characterizing a target product includes:
inputting the product information of the target product into a pre-trained vector generator to obtain the target product vector output by the vector generator;
the obtaining of the operation vector set corresponding to each operation data set includes:
for each operation data set, determining an operation vector for characterizing each operation data in the operation data set by using a pre-established knowledge graph; the knowledge graph is used to characterize associations between various operational data.
Optionally, the determining, by using a pre-trained capsule network, an aggregation vector corresponding to each operation vector set according to a plurality of operation vector sets includes:
inputting a plurality of operation vector sets into the capsule network, so that the capsule network aggregates each operation vector set to obtain the aggregation vector corresponding to each operation data set, wherein the aggregation vector is used for representing the association relationship among the operation data included in the corresponding operation data set.
Optionally, the attention network comprises: a plurality of self-attention layers, a bidirectional attention pooling layer, and an output layer; and the determining, by using a pre-trained attention network, a recognition result according to the plurality of aggregation vectors and the target product vector includes:
inputting a plurality of the aggregation vectors into a plurality of the self-attention layers in a fully-connected mode to obtain an intermediate vector output by each self-attention layer;
inputting the target product vector and the intermediate vector output by each self-attention layer into the bidirectional attention pooling layer to obtain a target vector output by the bidirectional attention pooling layer, wherein the dimension of the intermediate vector is the same as that of the target product vector;
and inputting the target vector into the output layer to obtain the recognition result output by the output layer.
Optionally, the inputting the plurality of aggregation vectors into the plurality of self-attention layers in a fully-connected manner to obtain an intermediate vector output by each self-attention layer includes:
selecting, according to the number of the operation data sets, a target number of target self-attention layers from the plurality of self-attention layers, wherein the target number is smaller than the number of the operation data sets;
inputting a plurality of aggregation vectors into a target number of target self-attention layers in a full-connection mode to obtain the intermediate vector output by each target self-attention layer;
the inputting the target product vector and the intermediate vector output by each self-attention layer into the bidirectional attention pooling layer to obtain a target vector output by the bidirectional attention pooling layer includes:
and inputting the target product vector and the intermediate vector output by each target self-attention layer into the bidirectional attention pooling layer, so as to perform weighted summation on the target product vector and the intermediate vectors according to attention weights by using the bidirectional attention pooling layer, thereby obtaining the target vector.
Optionally, the method further comprises:
and if the recognition result indicates that the feedback of the target object to the target product is positive, sending multimedia information for displaying the target product to the target object.
Optionally, the capsule network and the attention network are obtained by joint training as follows:
obtaining a sample input set and a sample output set, the sample input set comprising: a plurality of sample inputs, each of the sample inputs including a sample product and a sequence of operational information for a sample object, a sample output corresponding to each of the sample inputs being included in the set of sample outputs, each of the sample outputs including feedback for the sample product by the corresponding sample object;
using the set of sample inputs as inputs to the capsule network, using outputs of the capsule network and sample product vectors characterizing the sample products as inputs to the attention network, and using the set of sample outputs as outputs of the attention network, to jointly train the capsule network and the attention network.
According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for processing information, the apparatus including:
the system comprises a sequence acquisition module, a data processing module and a data processing module, wherein the sequence acquisition module is used for acquiring an operation information sequence corresponding to a target object and dividing the operation information sequence into a plurality of operation data sets, and each operation data set comprises a specified number of operation data;
the vector acquisition module is used for acquiring a target product vector for representing a target product and acquiring an operation vector set corresponding to each operation data set, wherein the operation vector set comprises an operation vector for representing each operation data in the corresponding operation data set;
the first processing module is used for determining an aggregation vector corresponding to each operation vector set according to a plurality of operation vector sets by utilizing a pre-trained capsule network;
and the second processing module is used for determining a recognition result according to the aggregation vectors and the target product vector by utilizing a pre-trained attention network, wherein the recognition result is used for indicating the feedback of the target object to the target product.
Optionally, the vector obtaining module includes:
the first obtaining submodule is used for inputting the product information of the target product into a vector generator trained in advance so as to obtain the target product vector output by the vector generator;
a second obtaining submodule, configured to determine, for each operation data set, an operation vector for characterizing each operation data in the operation data set by using a pre-established knowledge graph; the knowledge graph is used to characterize associations between various operational data.
Optionally, the first processing module is configured to:
inputting a plurality of operation vector sets into the capsule network, so that the capsule network aggregates each operation vector set to obtain the aggregation vector corresponding to each operation data set, wherein the aggregation vector is used for representing the association relationship among the operation data included in the corresponding operation data set.
Optionally, the attention network comprises: a plurality of self-attention layers, a bidirectional attention pooling layer, and an output layer; and the second processing module comprises:
the input submodule is used for inputting the plurality of aggregation vectors into the plurality of self-attention layers in a full-connection mode so as to obtain an intermediate vector output by each self-attention layer;
the pooling sub-module is used for inputting the target product vector and the intermediate vector output by each self-attention layer into the bidirectional attention pooling layer to obtain a target vector output by the bidirectional attention pooling layer, wherein the dimension of the intermediate vector is the same as that of the target product vector;
and the output submodule is used for inputting the target vector into the output layer to obtain the recognition result output by the output layer.
Optionally, the input submodule is configured to:
selecting, according to the number of the operation data sets, a target number of target self-attention layers from the plurality of self-attention layers, wherein the target number is smaller than the number of the operation data sets;
inputting a plurality of aggregation vectors into a target number of target self-attention layers in a full-connection mode to obtain the intermediate vector output by each target self-attention layer;
the pooling sub-module is to:
and inputting the target product vector and the intermediate vector output by each target self-attention layer into the bidirectional attention pooling layer, so as to perform weighted summation on the target product vector and the intermediate vectors according to attention weights by using the bidirectional attention pooling layer, thereby obtaining the target vector.
Optionally, the apparatus further comprises:
and the sending module is used for sending multimedia information for displaying the target product to the target object if the recognition result indicates that the feedback of the target object to the target product is positive.
Optionally, the capsule network and the attention network are obtained by joint training as follows:
obtaining a sample input set and a sample output set, the sample input set comprising: a plurality of sample inputs, each of the sample inputs including a sample product and a sequence of operational information for a sample object, a sample output corresponding to each of the sample inputs being included in the set of sample outputs, each of the sample outputs including feedback for the sample product by the corresponding sample object;
using the set of sample inputs as inputs to the capsule network, using outputs of the capsule network and sample product vectors characterizing the sample products as inputs to the attention network, and using the set of sample outputs as outputs of the attention network, to jointly train the capsule network and the attention network.
According to a third aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of the first aspect.
Through the above technical solution, the present disclosure first acquires an operation information sequence corresponding to a target object and divides it into a plurality of operation data sets, each including a specified number of operation data. Then, a target product vector for characterizing the target product and an operation vector set corresponding to each operation data set are obtained, and a pre-trained capsule network determines an aggregation vector corresponding to each operation vector set. Finally, a pre-trained attention network determines, according to the plurality of aggregation vectors and the target product vector, a recognition result indicating the feedback of the target object to the target product. By aggregating the operation information sequence through the capsule network and recognizing the aggregated result together with the target product through the attention network, the disclosure makes full use of the information carried by the operation information sequence and improves both the speed and the accuracy of recognition.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is a flow diagram illustrating a method of processing information in accordance with an exemplary embodiment;
FIG. 2 is a schematic diagram illustrating the connectivity of a capsule network and an attention network, according to an exemplary embodiment;
FIG. 3 is a flow chart illustrating another method of processing information in accordance with an exemplary embodiment;
FIG. 4 is a flow diagram illustrating another method of processing information in accordance with an illustrative embodiment;
FIG. 5 is a flow chart illustrating another method of processing information in accordance with an illustrative embodiment;
FIG. 6 is a flow chart of a method of jointly training a capsule network and an attention network;
FIG. 7 is a block diagram illustrating an apparatus for processing information in accordance with an exemplary embodiment;
FIG. 8 is a block diagram illustrating another information processing apparatus according to an example embodiment;
FIG. 9 is a block diagram illustrating another information processing apparatus according to an example embodiment;
FIG. 10 is a block diagram illustrating another information processing apparatus according to an example embodiment;
FIG. 11 is a block diagram illustrating an electronic device in accordance with an exemplary embodiment;
FIG. 12 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flow chart illustrating a method of processing information according to an exemplary embodiment, which may include the steps of, as shown in fig. 1:
step 101, obtaining an operation information sequence corresponding to a target object, and dividing the operation information sequence into a plurality of operation data sets, wherein each operation data set comprises a specified number of operation data.
For example, to predict the feedback of the target object to the target product, the operation information sequence corresponding to the target object may be obtained first. The feedback made by the target object to the target product falls into two types: positive feedback and negative feedback. Positive feedback indicates that the target object performs a preset operation on the target product, where the preset operation may be, for example, clicking, collecting, sharing, or purchasing; negative feedback indicates that the target object does not perform the preset operation on the target product. When the preset operation is a click, this embodiment can be understood as CTR (Click-Through Rate) prediction. The target object can be understood as a delivery platform that delivers multimedia information for displaying the target product, through which a user can perform the preset operation on the target product; the delivery platform may be, for example, an application program, a group of application programs corresponding to the same server, or a page within an application program. The target object can also be understood as a delivery area, in which users can perform the preset operation on the target product; the delivery area may be, for example, an area covered by a local area network, an area covered by a base station, or a service area defined by an operator. The target object can also be understood as a terminal device, through which a user can perform the preset operation on the target product, or as a user. The present disclosure does not specifically limit the meaning of the target object.
The operation information sequence corresponding to the target object can be understood as the set of operation data executed by the target object within a period of time (e.g., 7 days, 1 month, 6 months, or 12 months). The operation information sequence may include a large amount of operation data arranged in time order, each operation data describing an operation executed by the target object at a given moment; the operation data may cover multiple dimensions, such as product name, product ID, product category ID, operation type, operation time interval, and purchase quantity. Since the operation information sequence includes a large number of operation data, it may first be divided, in time order, into a plurality of operation data sets, each including a specified number of operation data. For example, an operation information sequence including 10000 operation data may be divided, in time order, into 500 operation data sets of 20 operation data each. It should be noted that, in the scenario in which the target object is a user, the operation data included in the operation information sequence is obtained with the user's authorization, is actively submitted after the user reads the relevant description, or is data that the terminal device must send to the server when the user uses it.
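Purely as an illustration of this division step, here is a minimal sketch assuming the operation information sequence is already a time-ordered Python list; the helper name, the chunk size, and the drop-the-short-tail policy are assumptions of the sketch, not requirements of the disclosure:

```python
from typing import Any, List

def split_into_operation_data_sets(
    sequence: List[Any], specified_number: int
) -> List[List[Any]]:
    """Divide a time-ordered operation information sequence into consecutive
    operation data sets of `specified_number` operation data each."""
    sets = [
        sequence[i : i + specified_number]
        for i in range(0, len(sequence), specified_number)
    ]
    # Assumed policy: drop a trailing set shorter than the specified number
    # so that every operation data set has a uniform size.
    if sets and len(sets[-1]) < specified_number:
        sets.pop()
    return sets

# 10000 operation data with 20 per set -> 500 operation data sets,
# matching the example in the text.
assert len(split_into_operation_data_sets(list(range(10000)), 20)) == 500
```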
Step 102, obtaining a target product vector for characterizing a target product, and obtaining an operation vector set corresponding to each operation data set, where the operation vector set includes an operation vector for characterizing each operation data in the corresponding operation data set.
For example, a target product vector for characterizing the target product, and an operation vector set for characterizing each operation data set, may be obtained. Specifically, product information describing the target product may be input into a pre-trained vector generator or encoder to obtain the target product vector; the product information may include the product name, product ID, category ID, size, specification, and the like of the target product. Further, for each operation data set, an operation vector characterizing each operation data included in that set may be determined in turn, and the specified number of operation vectors grouped into the operation vector set corresponding to that operation data set, where every operation vector has the same dimension. For example, a knowledge graph may be established in advance and used to determine the operation vector of each operation data; alternatively, each operation data may be input into a pre-trained vector generator or encoder (e.g., a one-hot encoder) to obtain its operation vector. The present disclosure does not limit this.
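As a rough sketch of how such a vector generator might look (the disclosure leaves its internals open, so the field set, vocabulary sizes, the 64-dimensional output, and the class name below are all illustrative assumptions):

```python
import torch
import torch.nn as nn

class VectorGenerator(nn.Module):
    """Assumed design: embed categorical product-information fields
    (product ID, category ID, ...) and fuse them into one product vector."""

    def __init__(self, num_products: int, num_categories: int, dim: int = 64):
        super().__init__()
        self.product_emb = nn.Embedding(num_products, dim)
        self.category_emb = nn.Embedding(num_categories, dim)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, product_id: torch.Tensor, category_id: torch.Tensor) -> torch.Tensor:
        fields = torch.cat(
            [self.product_emb(product_id), self.category_emb(category_id)], dim=-1
        )
        return self.fuse(fields)  # the target product vector

generator = VectorGenerator(num_products=10000, num_categories=200)
target_product_vector = generator(torch.tensor([42]), torch.tensor([7]))  # shape (1, 64)
```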
And 103, determining an aggregation vector corresponding to each operation vector set according to the plurality of operation vector sets by using the pre-trained capsule network.
And 104, determining a recognition result according to the aggregation vectors and the target product vector by using a pre-trained attention network, wherein the recognition result is used for indicating the feedback of the target object to the target product.
For example, a plurality of operation vector sets may be input into a pre-trained Capsule Network to obtain the aggregation vector output by the capsule network for each operation vector set. The capsule network aggregates each operation vector set into an aggregation vector capable of representing that set, so the number of aggregation vectors equals the number of operation vector sets, which in turn equals the number of operation data sets. Because the capsule network can learn deep representations of information as well as the associations among pieces of information, each aggregation vector can effectively represent the associations among the specified number of operation vectors in the corresponding operation vector set. That is, by integrating a specified number of operation vectors into one aggregation vector through the capsule network, the information contained in the corresponding operation vector set is fully utilized while the data volume is reduced. Then, the plurality of aggregation vectors and the target product vector may be input into a pre-trained Attention Network to obtain the recognition result, output by the attention network, that indicates the feedback of the target object to the target product; the recognition result may be positive feedback or negative feedback. The attention network learns the associations between each aggregation vector and the target product vector to determine whether the feedback of the target object to the target product is positive or negative. In this way the operation information sequence need not be truncated, so no information is lost; nor need it be classified and processed piecewise, so the internal associations among the information are preserved. The complete information included in the operation information sequence corresponding to the target object can therefore be fully utilized, effectively improving the accuracy of the recognition result. Furthermore, integrating the specified number of operation vectors into one aggregation vector effectively reduces the amount of data to be processed, increases the speed of obtaining the recognition result, and ensures that the embodiment is practicable.
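A minimal sketch of this aggregation idea follows: one output capsule per operation vector set, produced by a few dynamic-routing iterations. The single-output-capsule routing variant, the three iterations, and all names are assumptions of the sketch; the patent does not fix the capsule network's internals.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(s: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Capsule squash non-linearity: keeps direction, bounds length in [0, 1)."""
    sq = (s * s).sum(dim=dim, keepdim=True)
    return (sq / (1.0 + sq)) * s / torch.sqrt(sq + 1e-9)

class CapsuleAggregator(nn.Module):
    """Illustrative capsule-style aggregation: routes the specified number of
    operation vectors in each set into a single aggregation vector."""

    def __init__(self, dim: int, iterations: int = 3):
        super().__init__()
        self.transform = nn.Linear(dim, dim, bias=False)
        self.iterations = iterations

    def forward(self, op_vectors: torch.Tensor) -> torch.Tensor:
        # op_vectors: (num_sets, specified_number, dim)
        u_hat = self.transform(op_vectors)
        logits = torch.zeros(u_hat.shape[:2], device=u_hat.device)  # routing logits
        for _ in range(self.iterations):
            # Softmax over the input vectors: a common single-output variant
            # (the original routing normalizes over output capsules instead).
            c = F.softmax(logits, dim=-1).unsqueeze(-1)          # coupling coefficients
            v = squash((c * u_hat).sum(dim=1))                   # candidate aggregation vector
            logits = logits + (u_hat * v.unsqueeze(1)).sum(-1)   # agreement update
        return v  # (num_sets, dim): one aggregation vector per operation data set
```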
The connection relationship between the capsule network and the attention network can be as shown in fig. 2, wherein a plurality of operation vector sets are used as the input of the capsule network, and the output of the capsule network is used as the input of the attention network together with a target product vector (not shown in fig. 2) to obtain the recognition result of the attention network output. Further, the capsule network and the attention network can be obtained by joint training with a large number of training samples.
In summary, the present disclosure first acquires an operation information sequence corresponding to a target object and divides it into a plurality of operation data sets, each including a specified number of operation data. Then, a target product vector for characterizing the target product and an operation vector set corresponding to each operation data set are obtained, and a pre-trained capsule network determines an aggregation vector corresponding to each operation vector set. Finally, a pre-trained attention network determines, according to the plurality of aggregation vectors and the target product vector, a recognition result indicating the feedback of the target object to the target product. By aggregating the operation information sequence through the capsule network and recognizing the aggregated result together with the target product through the attention network, the disclosure makes full use of the information carried by the operation information sequence and improves both the speed and the accuracy of recognition.
Fig. 3 is a flow chart illustrating another method of processing information according to an example embodiment, and as shown in fig. 3, step 102 may include:
step 1021, inputting the product information of the target product into a vector generator trained in advance to obtain the target product vector output by the vector generator.
In step 1022, for each operation data set, an operation vector for characterizing each operation data in the operation data set is determined by using a pre-established knowledge graph. Knowledge-graphs are used to characterize associations between various operational data.
For example, the product information of the target product may be input into a pre-trained vector generator, which extracts from the product information a target product vector capable of representing the target product. The product information may include, for example, the product name, product ID, category ID, size, and specification of the target product.
For the specified number of operation data included in each operation data set, the operation vector corresponding to each operation data may be determined in turn by using a pre-established knowledge graph, where the knowledge graph characterizes the associations among the operation data. In one implementation, a single knowledge graph may be established that includes a plurality of nodes and at least one edge, where each node represents one operation datum and each edge characterizes an association between the two nodes at its ends; the width or value of an edge may further characterize the attributes of that association. In another implementation, multiple knowledge graphs may be established to characterize the operation data from multiple dimensions, such as a knowledge graph of the product ID dimension and a knowledge graph of the category ID dimension, where each knowledge graph includes a plurality of nodes and at least one edge, each node represents a product ID (or a category ID), and each edge indicates an association between the two nodes at its ends. To determine the operation vectors from the knowledge graph, the vector corresponding to each node may be computed with, for example, a Graph Neural Network (GNN), a Graph Convolutional Network (GCN), or GraphSAGE, yielding the operation vector that characterizes the corresponding operation data. The present disclosure does not specifically limit this.
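The sketch below derives operation vectors from such a knowledge graph, assuming a dense 0/1 adjacency matrix and a single mean-over-neighbors layer standing in for the GNN / GCN / GraphSAGE choice the text leaves open:

```python
import torch
import torch.nn as nn

class SimpleGraphEncoder(nn.Module):
    """Illustrative one-layer graph encoder over a pre-built knowledge graph:
    each node is one operation datum (e.g. a product ID); an edge marks an
    association between two operation data."""

    def __init__(self, num_nodes: int, dim: int = 64):
        super().__init__()
        self.node_emb = nn.Embedding(num_nodes, dim)
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, adjacency: torch.Tensor) -> torch.Tensor:
        # adjacency: (num_nodes, num_nodes) 0/1 matrix of graph edges
        h = self.node_emb.weight
        deg = adjacency.sum(dim=1, keepdim=True).clamp(min=1.0)
        neighbor_mean = adjacency @ h / deg  # aggregate associated nodes
        return torch.relu(self.update(torch.cat([h, neighbor_mean], dim=-1)))

# The operation vector for operation datum i is row i of the output.
encoder = SimpleGraphEncoder(num_nodes=5)
operation_vectors = encoder((torch.rand(5, 5) > 0.5).float())  # shape (5, 64)
```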
In an application scenario, the implementation manner of step 103 may be:
and inputting the plurality of operation vector sets into the capsule network so that the capsule network aggregates each operation vector set to obtain an aggregation vector corresponding to each operation data set, wherein the aggregation vector is used for representing the incidence relation among the operation data included in the corresponding operation data set.
For example, a plurality of operation vector sets may be input into the capsule network, and each operation vector set aggregated by the capsule network, to obtain the aggregation vector corresponding to each operation data set. Because the capsule network can learn deep representations of information as well as the associations among pieces of information, the aggregation vector can effectively represent the associations among the specified number of operation vectors in the corresponding operation vector set; that is, integrating the specified number of operation vectors into one aggregation vector through the capsule network makes full use of the information contained in the corresponding operation vector set while reducing the data volume.
Fig. 4 is a flow chart illustrating another method of processing information according to an example embodiment, as shown in fig. 4, an attention network includes: a plurality of self-attention layers, a bi-directional attention pooling layer, and an output layer. Step 104 may include the steps of:
step 1041, inputting the multiple aggregation vectors into multiple self-attention layers in a full-connected manner, so as to obtain an intermediate vector output by each self-attention layer.
Step 1042, inputting the target product vector and the intermediate vector output by each self-attention layer into the bidirectional attention pooling layer to obtain a target vector output by the bidirectional attention pooling layer, wherein the dimension of the intermediate vector is the same as the dimension of the target product vector.
And step 1043, inputting the target vector into the output layer to obtain a recognition result output by the output layer.
For example, the attention network may include a bidirectional attention pooling layer (Bi-directional Attention Pooling) and a plurality of self-attention layers (Self-Attention), where the number of self-attention layers may be preset or determined according to the number of aggregation vectors. First, as shown in fig. 2, the plurality of aggregation vectors are input into the plurality of self-attention layers in a fully connected manner, and each self-attention layer outputs one intermediate vector, so the number of intermediate vectors equals the number of self-attention layers. The plurality of intermediate vectors can be understood as describing, from multiple dimensions, the associations among the operation data included in the operation information sequence. The target product vector and the intermediate vector output by each self-attention layer may then be input into the bidirectional attention pooling layer, which determines an attention weight for the target product vector and each intermediate vector through an attention mechanism and computes their weighted sum to obtain the target vector output by the bidirectional attention pooling layer. Finally, the target vector is input into the output layer to obtain the recognition result output by the output layer. For example, the output layer may determine the matching probabilities of the target vector with positive feedback and with negative feedback, respectively: if the matching probability with positive feedback is larger, the recognition result is positive feedback; if the matching probability with negative feedback is larger, the recognition result is negative feedback.
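A compact sketch of this forward pass is given below. Learned per-layer query vectors stand in for full self-attention, and dot-product scoring stands in for the bidirectional attention mechanism, which the disclosure does not spell out; every name and dimension here is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionNetwork(nn.Module):
    """Illustrative attention network: each 'self-attention layer' reads all
    aggregation vectors (full connection) and emits one intermediate vector;
    the pooling layer weights the target product vector against the
    intermediate vectors; a 2-way softmax output stands in for the output layer."""

    def __init__(self, dim: int, num_self_attention_layers: int):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_self_attention_layers, dim))
        self.pool_query = nn.Linear(dim, dim)
        self.output_layer = nn.Linear(dim, 2)  # positive vs. negative feedback

    def forward(self, agg: torch.Tensor, product: torch.Tensor) -> torch.Tensor:
        # agg: (num_sets, dim); product: (dim,)
        # Each learned query attends over all aggregation vectors.
        attn = F.softmax(self.queries @ agg.T, dim=-1)   # (layers, num_sets)
        intermediates = attn @ agg                       # (layers, dim), same dim as product
        # Attention pooling over the product vector plus intermediate vectors.
        candidates = torch.cat([product.unsqueeze(0), intermediates], dim=0)
        weights = F.softmax(candidates @ self.pool_query(product), dim=0)
        target_vector = (weights.unsqueeze(-1) * candidates).sum(dim=0)
        return F.softmax(self.output_layer(target_vector), dim=-1)  # recognition result
```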
In an application scenario, the implementation manner of step 1041 may include:
step 1) determining a target number of self-attention layers in a plurality of self-attention layers according to the number of operation data sets, wherein the target number is smaller than the number of the operation data sets.
And 2) inputting a plurality of aggregation vectors into a target number of target self-attention layers in a full-connection mode to obtain an intermediate vector output by each target self-attention layer.
Accordingly, the implementation of step 1042 may include:
and 3) inputting the target product vector and each intermediate vector output by the target self-attention layer into the bidirectional attention pooling layer, and performing weighted summation on the target product vector and the intermediate vectors according to attention weights by using the bidirectional attention pooling layer to obtain a target vector.
For example, to further reduce the amount of data that needs to be processed, the number of self-attention layers in the attention network may be adjusted based on the number of operational data sets. Specifically, the attention network includes a plurality of self-attention layers, and a target number of target self-attention layers may be selected according to the number of operation data sets, where the target number is smaller than the number of operation data sets, and a correspondence between the target number and the number of operation data sets may be established in advance. For example, when the number of operation data sets is 100, the target number is 10, and for another example, when the number of operation data sets is 20, the target number is 5. Correspondingly, a plurality of aggregation vectors are input into a target number of target self-attention layers in a full-connection mode, so that an intermediate vector output by each target self-attention layer is obtained. That is, a target number of intermediate vectors are obtained by a target number of target self-attention layers. And then inputting the target product vector and the target number of intermediate vectors into a bidirectional attention pooling layer, and performing weighted summation on the target product vector and the target number of intermediate vectors according to attention weights by using the bidirectional attention pooling layer to obtain a target vector. Since the target number is smaller than the number of operation data sets, the number of intermediate vectors to be processed by the bidirectional attention pooling layer is reduced, and the data amount to be processed can be further reduced.
For example, suppose the operation information sequence includes 10000 operation data and is divided, in time order, into 100 operation data sets (each including 100 operation data), so that 100 aggregation vectors are obtained through steps 102 to 103. If the attention weights of the 100 aggregation vectors were determined directly with the attention mechanism, 100 × 100 computations would be needed, which is a large amount of calculation. If instead 10 (i.e., the target number) self-attention layers are selected as target self-attention layers from the plurality of self-attention layers included in the attention network, 10 intermediate vectors are obtained, and obtaining the target vector with the bidirectional attention pooling layer then requires only 10 × 10 computations. The amount of calculation is reduced by two orders of magnitude, so the recognition speed can be improved.
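In symbols, with S aggregation vectors and a target number T of target self-attention layers, the pairwise attention cost in this example changes roughly as follows (constants ignored):

```latex
% Cost comparison implied by the example: direct attention over S = 100
% aggregation vectors vs. attention pooling over T = 10 intermediate vectors.
\[
\underbrace{S \times S = 100 \times 100 = 10^{4}}_{\text{direct attention}}
\quad\longrightarrow\quad
\underbrace{T \times T = 10 \times 10 = 10^{2}}_{\text{pooling over target layers}}
\]
```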
Fig. 5 is a flowchart illustrating another information processing method according to an exemplary embodiment, and as shown in fig. 5, the method may further include:
and 105, if the identification result indicates that the target object is positively fed back to the target product, sending multimedia information for displaying the target product to the target object.
For example, if the recognition result output by the attention network indicates that the feedback of the target object to the target product is positive, meaning the target object will perform the preset operation (e.g., clicking, collecting, sharing, or purchasing) on the target product, multimedia information for displaying the target product may be sent to the target object. The multimedia information may include a picture or video showing the target product, a purchase link for the target product, a coupon for the target product, and the like; the present disclosure does not specifically limit the form or content of the multimedia information. If the target object is a delivery platform, the multimedia information can be displayed on that platform; if the target object is a delivery area, it can be displayed within that area; and if the target object is a terminal device, it can be displayed on the display interface of that device.
Fig. 6 is a flowchart of a method for jointly training a capsule network and an attention network, and as shown in fig. 6, the capsule network and the attention network are obtained by joint training as follows:
step A, obtaining a sample input set and a sample output set, wherein the sample input set comprises: a plurality of sample inputs, each sample input comprising a sequence of operational information for a sample object and a sample product, a set of sample outputs comprising a sample output corresponding to each sample input, each sample output comprising feedback of the corresponding sample object on the sample product.
And step B, taking the sample input set as the input of the capsule network, taking the output of the capsule network and a sample product vector for representing a sample product as the input of the attention network, and taking the sample output set as the output of the attention network so as to jointly train the capsule network and the attention network.
For example, when training the capsule network and the attention network of the above embodiments, a sample input set and a sample output set are first acquired. The sample input set includes a plurality of sample inputs, each including an operation information sequence of a sample object and a sample product. The plurality of sample objects may include a plurality of positive sample objects, whose feedback on the sample product is positive, and a plurality of negative sample objects, whose feedback on the sample product is negative. Further, the ratio of the number of positive sample objects to the number of negative sample objects may be controlled (e.g., 1:1).
The set of sample outputs includes a sample output corresponding to each sample input, each sample output including feedback of the corresponding sample object to the sample product. That is, the sample output corresponding to the positive sample object is positive feedback (which can be represented as 1), and the sample output corresponding to the negative sample object is negative feedback (which can be represented as 0).
Then, the sample input set is used as the input of the capsule network, the output of the capsule network together with the sample product vector characterizing the sample product is used as the input of the attention network, and the sample output set is used as the output of the attention network, so as to jointly train the capsule network and the attention network until, given the sample input set, the output of the attention network matches the sample output set.
Specifically, the operation information sequence of each sample object included in the sample input set may be divided into a plurality of operation data sets of that sample object, after which a sample product vector characterizing the sample product and an operation vector set corresponding to each operation data set are obtained. The plurality of operation vector sets of the sample objects are then input into the capsule network to obtain the aggregation vectors output by the capsule network, and the aggregation vectors together with the sample product vectors are input into the attention network to obtain the output of the attention network. The loss functions of the capsule network and the attention network may be determined from the output of the attention network and the sample output set, and the parameters of the neurons in the two networks, such as their weights and biases, corrected with the goal of reducing the loss. These steps are repeated until the loss function meets a preset condition, for example, falls below a preset loss threshold.
Specifically, for the capsule network, its loss function can be determined from the margin loss and the reconstruction loss, and the parameters of its neurons corrected through a dynamic routing mechanism; the dynamic routing mechanism measures the similarity among the vectors included in an operation vector set, so that the trained capsule network learns deep representations of the information while also learning the associations among the information. For the attention network, the loss function may be a cross-entropy loss, and the parameters of its neurons may be corrected through a back-propagation algorithm.
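Tying the sketches together, a minimal joint-training step might look as follows. It reuses the CapsuleAggregator and AttentionNetwork sketches above, folds the margin and reconstruction terms into a single cross-entropy-style objective for brevity, and treats the optimizer, learning rate, and tensor shapes as assumptions:

```python
import torch
import torch.nn.functional as F

capsules = CapsuleAggregator(dim=64)                                # sketch from step 103
attention = AttentionNetwork(dim=64, num_self_attention_layers=10)  # sketch from step 104
optimizer = torch.optim.Adam(
    list(capsules.parameters()) + list(attention.parameters()), lr=1e-3
)

def joint_training_step(op_vector_sets: torch.Tensor,
                        sample_product_vector: torch.Tensor,
                        label: torch.Tensor) -> float:
    # op_vector_sets: (num_sets, specified_number, 64); label: 1 positive / 0 negative
    aggregation_vectors = capsules(op_vector_sets)
    probs = attention(aggregation_vectors, sample_product_vector)  # (2,) probabilities
    loss = F.nll_loss(torch.log(probs + 1e-9).unsqueeze(0), label.view(1))
    optimizer.zero_grad()
    loss.backward()   # back-propagation through both networks (joint training)
    optimizer.step()
    return loss.item()

# One sample input: 100 operation vector sets of 20 operation vectors each,
# with a positive-feedback (1) sample output; repeat over the sample sets
# until the loss falls below a preset threshold, as the text describes.
loss = joint_training_step(torch.randn(100, 20, 64), torch.randn(64), torch.tensor(1))
```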
In summary, the present disclosure first acquires an operation information sequence corresponding to a target object and divides it into a plurality of operation data sets, each including a specified number of operation data. Then, a target product vector for characterizing the target product and an operation vector set corresponding to each operation data set are obtained, and a pre-trained capsule network determines an aggregation vector corresponding to each operation vector set. Finally, a pre-trained attention network determines, according to the plurality of aggregation vectors and the target product vector, a recognition result indicating the feedback of the target object to the target product. By aggregating the operation information sequence through the capsule network and recognizing the aggregated result together with the target product through the attention network, the disclosure makes full use of the information carried by the operation information sequence and improves both the speed and the accuracy of recognition.
Fig. 7 is a block diagram illustrating an apparatus for processing information according to an exemplary embodiment, and as shown in fig. 7, the apparatus 200 includes:
the sequence acquiring module 201 is configured to acquire an operation information sequence corresponding to a target object, and divide the operation information sequence into a plurality of operation data sets, where each operation data set includes a specified number of operation data.
The vector obtaining module 202 is configured to obtain a target product vector for characterizing a target product, and obtain an operation vector set corresponding to each operation data set, where the operation vector set includes an operation vector for characterizing each operation data in the corresponding operation data set.
The first processing module 203 is configured to determine, by using the pre-trained capsule network, an aggregation vector corresponding to each operation vector set according to the multiple operation vector sets.
And the second processing module 204 is configured to determine, by using the pre-trained attention network, a recognition result according to the plurality of aggregation vectors and the target product vector, where the recognition result is used to indicate feedback of the target object to the target product.
Fig. 8 is a block diagram illustrating another information processing apparatus according to an exemplary embodiment, and as shown in fig. 8, the vector obtaining module 202 may include:
the first obtaining sub-module 2021 is configured to input the product information of the target product into a pre-trained vector generator to obtain a target product vector output by the vector generator.
The second obtaining sub-module 2022 is configured to determine, for each operation data set, an operation vector for characterizing each operation data in the operation data set by using a pre-established knowledge graph. Knowledge-graphs are used to characterize associations between various operational data.
In an application scenario, the first processing module 203 is configured to:
and inputting the plurality of operation vector sets into the capsule network so that the capsule network aggregates each operation vector set to obtain an aggregation vector corresponding to each operation data set, wherein the aggregation vector is used for representing the incidence relation among the operation data included in the corresponding operation data set.
Fig. 9 is a block diagram illustrating another information processing apparatus according to an exemplary embodiment, and as shown in fig. 9, an attention network includes: a plurality of self-attention layers, a bi-directional attention pooling layer, and an output layer. The second processing module 204 may include:
the input submodule 2041 is configured to input the multiple aggregation vectors into multiple self-attention layers in a fully connected manner, so as to obtain an intermediate vector output from each self-attention layer.
The pooling sub-module 2042 is configured to input the target product vector and the intermediate vector output by each self-attention layer into the bidirectional attention pooling layer to obtain a target vector output by the bidirectional attention pooling layer, where the dimension of the intermediate vector is the same as the dimension of the target product vector.
The output submodule 2043 is configured to input the target vector into the output layer to obtain the recognition result output by the output layer.
In one application scenario, the input submodule 2041 may be configured to perform the following steps:
step 1) determining a target number of self-attention layers in a plurality of self-attention layers according to the number of operation data sets, wherein the target number is smaller than the number of the operation data sets.
And 2) inputting a plurality of aggregation vectors into a target number of target self-attention layers in a full-connection mode to obtain an intermediate vector output by each target self-attention layer.
The pooling sub-module 2042 may be used to perform the following steps:
and 3) inputting the target product vector and each intermediate vector output by the target self-attention layer into the bidirectional attention pooling layer, and performing weighted summation on the target product vector and the intermediate vectors according to attention weights by using the bidirectional attention pooling layer to obtain a target vector.
Fig. 10 is a block diagram illustrating another information processing apparatus according to an exemplary embodiment, and as shown in fig. 10, the apparatus 200 may further include:
and the sending module 205 is configured to send multimedia information for displaying the target product to the target object if the identification result indicates that the target object is positively fed back to the target product.
In one implementation, the capsule network and the attention network are obtained by joint training as follows:
step A, obtaining a sample input set and a sample output set, wherein the sample input set comprises: a plurality of sample inputs, each sample input comprising a sequence of operational information for a sample object and a sample product, a set of sample outputs comprising a sample output corresponding to each sample input, each sample output comprising feedback of the corresponding sample object on the sample product.
And step B, taking the sample input set as the input of the capsule network, taking the output of the capsule network and a sample product vector for representing a sample product as the input of the attention network, and taking the sample output set as the output of the attention network so as to jointly train the capsule network and the attention network.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In summary, the present disclosure first acquires an operation information sequence corresponding to a target object and divides it into a plurality of operation data sets, each including a specified number of operation data. Then, a target product vector for characterizing the target product and an operation vector set corresponding to each operation data set are obtained, and a pre-trained capsule network determines an aggregation vector corresponding to each operation vector set. Finally, a pre-trained attention network determines, according to the plurality of aggregation vectors and the target product vector, a recognition result indicating the feedback of the target object to the target product. By aggregating the operation information sequence through the capsule network and recognizing the aggregated result together with the target product through the attention network, the disclosure makes full use of the information carried by the operation information sequence and improves both the speed and the accuracy of recognition.
FIG. 11 is a block diagram illustrating an electronic device 300 in accordance with an example embodiment. As shown in fig. 11, the electronic device 300 may include: a processor 301 and a memory 302. The electronic device 300 may also include one or more of a multimedia component 303, an input/output (I/O) interface 304, and a communication component 305.
The processor 301 is configured to control the overall operation of the electronic device 300 so as to complete all or part of the steps of the information processing method. The memory 302 is configured to store various types of data to support operation at the electronic device 300, such as instructions for any application or method operating on the electronic device 300, as well as application-related data such as contact data, transmitted and received messages, pictures, audio, video, and the like. The memory 302 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk. The multimedia component 303 may include a screen and an audio component, where the screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may further be stored in the memory 302 or transmitted through the communication component 305. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 304 provides an interface between the processor 301 and other interface modules such as a keyboard, a mouse, or buttons, which may be virtual or physical. The communication component 305 is used for wired or wireless communication between the electronic device 300 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, 5G, or the like, or a combination of one or more of them, which is not limited herein. Accordingly, the communication component 305 may include a Wi-Fi module, a Bluetooth module, an NFC module, and so on.
In an exemplary embodiment, the electronic device 300 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-described information processing method.
In another exemplary embodiment, there is also provided a computer-readable storage medium including program instructions which, when executed by a processor, implement the steps of the above-described information processing method. For example, the computer readable storage medium may be the memory 302 including program instructions executable by the processor 301 of the electronic device 300 to perform the information processing method described above.
Fig. 12 is a block diagram illustrating an electronic device 400 according to an example embodiment. For example, the electronic device 400 may be provided as a server. Referring to fig. 12, the electronic device 400 includes a processor 422, which may be one or more in number, and a memory 432 for storing computer programs executable by the processor 422. The computer program stored in memory 432 may include one or more modules that each correspond to a set of instructions. Further, the processor 422 may be configured to execute the computer program to perform the above-described information processing method.
In addition, the electronic device 400 may include a power component 426 and a communication component 450; the power component 426 may be configured to perform power management of the electronic device 400, and the communication component 450 may be configured to enable communication, e.g., wired or wireless communication, of the electronic device 400. The electronic device 400 may also include an input/output (I/O) interface 458. The electronic device 400 may operate based on an operating system stored in the memory 432, such as Windows Server™, Mac OS X™, Unix™, Linux™, and the like.
In another exemplary embodiment, there is also provided a computer-readable storage medium including program instructions which, when executed by a processor, implement the steps of the above-described information processing method. For example, the computer readable storage medium may be the memory 432 including program instructions executable by the processor 422 of the electronic device 400 to perform the above-described information processing method.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-described information processing method when executed by the programmable apparatus.
Preferred embodiments of the present disclosure are described in detail above with reference to the accompanying drawings; however, the present disclosure is not limited to the specific details of the above embodiments. Other embodiments that those skilled in the art may readily conceive within the technical spirit of the present disclosure, after considering the description and practicing the disclosure, all fall within its protection scope.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the present disclosure. Likewise, any combination of the different embodiments of the disclosure may be made, and such combinations should equally be regarded as content disclosed herein as long as they do not depart from the idea of the disclosure. The present disclosure is not limited to the precise structures described above; its scope is limited only by the appended claims.

Claims (10)

1. A method for processing information, the method comprising:
acquiring an operation information sequence corresponding to a target object, and dividing the operation information sequence into a plurality of operation data sets, wherein each operation data set comprises a specified number of operation data;
obtaining a target product vector for representing a target product, and obtaining an operation vector set corresponding to each operation data set, wherein the operation vector set comprises an operation vector for representing each operation data in the corresponding operation data set;
determining an aggregation vector corresponding to each operation vector set according to a plurality of operation vector sets by utilizing a pre-trained capsule network;
and determining a recognition result according to a plurality of the aggregation vectors and the target product vector by utilizing a pre-trained attention network, wherein the recognition result is used for indicating the feedback of the target object on the target product.
2. The method of claim 1, wherein the obtaining a target product vector for characterizing a target product comprises:
inputting the product information of the target product into a pre-trained vector generator to obtain the target product vector output by the vector generator;
the obtaining of the operation vector set corresponding to each operation data set includes:
for each operation data set, determining an operation vector for characterizing each operation data in the operation data set by using a pre-established knowledge graph; the knowledge graph is used to characterize associations between various operational data.
3. The method of claim 1, wherein determining, using a pre-trained capsule network, an aggregate vector for each set of operation vectors from a plurality of the sets of operation vectors comprises:
inputting a plurality of operation vector sets into the capsule network, so that the capsule network aggregates each operation vector set to obtain the aggregation vector corresponding to each operation data set, wherein the aggregation vector is used for representing the association relationship among the operation data included in the corresponding operation data set.
4. The method of claim 1, wherein the attention network comprises: a plurality of self-attention layers, a bidirectional attention pooling layer, and an output layer; and the determining, by using a pre-trained attention network, a recognition result according to the plurality of aggregation vectors and the target product vector comprises:
inputting a plurality of the aggregation vectors into a plurality of the self-attention layers in a fully-connected mode to obtain an intermediate vector output by each self-attention layer;
inputting the target product vector and each intermediate vector output from the attention layer into the bidirectional attention pooling layer to obtain a target vector output by the bidirectional attention pooling layer, wherein the dimension of the intermediate vector is the same as that of the target product vector;
and inputting the target vector into the output layer to obtain the identification result output by the output layer.
5. The method of claim 4, wherein said inputting a plurality of said aggregation vectors into a plurality of said self-attention layers in a fully-connected manner to obtain an intermediate vector output by each of said self-attention layers comprises:
determining, according to the number of the operation data sets, a target number of self-attention layers among the plurality of self-attention layers, wherein the target number is smaller than the number of the operation data sets;
inputting the plurality of aggregation vectors into the target number of target self-attention layers in a fully-connected manner to obtain the intermediate vector output by each target self-attention layer;
the inputting the target product vector and each intermediate vector output from the attention layer into the bidirectional attention pooling layer to obtain a target vector output from the bidirectional attention pooling layer includes:
and inputting the target product vector and the intermediate vector output by each target self-attention layer into the bidirectional attention pooling layer, so as to perform weighted summation on the target product vector and the intermediate vectors according to attention weights by using the bidirectional attention pooling layer, thereby obtaining the target vector.
6. The method according to any one of claims 1-5, further comprising:
and if the identification result indicates that the feedback of the target object on the target product is positive, sending multimedia information for displaying the target product to the target object.
7. The method according to any one of claims 1-5, wherein the capsule network and the attention network are co-trained by:
obtaining a sample input set and a sample output set, the sample input set comprising a plurality of sample inputs, each of the sample inputs including an operation information sequence of a sample object and a sample product; the sample output set comprising a sample output corresponding to each of the sample inputs, each of the sample outputs including feedback of the corresponding sample object on the sample product;
using the set of sample inputs as inputs to the capsule network, using outputs of the capsule network and sample product vectors characterizing the sample products as inputs to the attention network, and using the set of sample outputs as outputs of the attention network, to jointly train the capsule network and the attention network.
8. An apparatus for processing information, the apparatus comprising:
the system comprises a sequence acquisition module, a data processing module and a data processing module, wherein the sequence acquisition module is used for acquiring an operation information sequence corresponding to a target object and dividing the operation information sequence into a plurality of operation data sets, and each operation data set comprises a specified number of operation data;
the vector acquisition module is used for acquiring a target product vector for representing a target product and acquiring an operation vector set corresponding to each operation data set, wherein the operation vector set comprises an operation vector for representing each operation data in the corresponding operation data set;
the first processing module is used for determining an aggregation vector corresponding to each operation vector set according to a plurality of operation vector sets by utilizing a pre-trained capsule network;
and the second processing module is used for determining a recognition result according to the aggregation vectors and the target product vector by utilizing a pre-trained attention network, wherein the recognition result is used for indicating the feedback of the target object to the target product.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 7.
CN202111152863.7A 2021-09-29 2021-09-29 Information processing method and device, storage medium and electronic equipment Pending CN113962292A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111152863.7A CN113962292A (en) 2021-09-29 2021-09-29 Information processing method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111152863.7A CN113962292A (en) 2021-09-29 2021-09-29 Information processing method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN113962292A 2022-01-21

Family

ID=79463268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111152863.7A Pending CN113962292A (en) 2021-09-29 2021-09-29 Information processing method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113962292A (en)

Similar Documents

Publication Publication Date Title
CN109816039B (en) Cross-modal information retrieval method and device and storage medium
US8756184B2 (en) Predicting users' attributes based on users' behaviors
CN110598157B (en) Target information identification method, device, equipment and storage medium
CN109509010B (en) Multimedia information processing method, terminal and storage medium
CN109474542B (en) Message push request flow control method, device and medium based on business rules
CN103761254A (en) Method for matching and recommending service themes in various fields
CN110413867B (en) Method and system for content recommendation
CN112801719A (en) User behavior prediction method, user behavior prediction device, storage medium, and apparatus
CN109214543B (en) Data processing method and device
CN109753275B (en) Recommendation method and device for application programming interface, storage medium and electronic equipment
CN115130711A (en) Data processing method and device, computer and readable storage medium
CN112214677A (en) Interest point recommendation method and device, electronic equipment and storage medium
CN107944026A (en) A kind of method, apparatus, server and the storage medium of atlas personalized recommendation
CN112000803B (en) Text classification method and device, electronic equipment and computer readable storage medium
CN114139046B (en) Object recommendation method and device, electronic equipment and storage medium
CN113672807B (en) Recommendation method, recommendation device, recommendation medium, recommendation device and computing equipment
CN113409096B (en) Target object identification method and device, computer equipment and storage medium
CN113962292A (en) Information processing method and device, storage medium and electronic equipment
CN112860999B (en) Information recommendation method, device, equipment and storage medium
CN111615178B (en) Method and device for identifying wireless network type and model training and electronic equipment
CN113761289A (en) Method, frame, computer system and readable storage medium for drawing learning
CN111432080A (en) Ticket data processing method, electronic equipment and computer readable storage medium
CN116881483B (en) Multimedia resource recommendation method, device and storage medium
CN116911304B (en) Text recommendation method and device
CN113283115B (en) Image model generation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination