Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without creative effort, shall fall within the protection scope of the present application.
With the development of mobile networks, more and more businesses promote their products to mobile users by pushing messages. In a related push method, before a message is pushed to a mobile user, a batch of push messages is first recalled from a push information base as candidate information, and one or more messages are then selected from the candidate information and pushed to the user. However, in the related push method, the accuracy with which the candidate messages recalled from the push information base match the user's interests still needs to be improved.
For example, in a related push approach, candidate messages are typically recalled by collaborative filtering or by vectorized recall. Collaborative filtering is divided into content-based collaborative filtering and user-based collaborative filtering. Content-based collaborative filtering recalls content similar to content the user has previously viewed; user-based collaborative filtering finds users similar to the target user and uses the pushed content that those similar users viewed, but the target user did not, as a candidate set. Although collaborative filtering recalls related pushed content from the user's historical behavior and thereby ensures content relevance, it cannot cover the full set of pushed content because that set is huge, so the recall lacks diversity.
In another related push method, vectorized recall learns vector representations of the user and of the content through a model, computes the similarity between the user's vector and each content vector as an inner product, and recalls the content whose similarity is above a threshold to generate a candidate set. However, vectorized recall relies only on similarity computation and ignores the supervision signal, especially in scenarios where content is iterated and updated quickly, so the recalled content does not match the user's interests closely and the recall accuracy is low.
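The inner-product recall described above can be sketched as follows. This is a minimal illustration in pure Python; the function name, toy vectors, and threshold are hypothetical, not part of the embodiment:

```python
# Sketch of vectorized recall: content whose inner product with the user
# vector exceeds a threshold is recalled into the candidate set.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def vectorized_recall(user_vec, content_vecs, threshold):
    """Return ids of contents whose similarity to the user exceeds the threshold."""
    return [cid for cid, vec in content_vecs.items()
            if dot(user_vec, vec) > threshold]

user = [0.5, 0.5, 0.0]
contents = {"A": [1.0, 0.0, 0.0], "B": [0.0, 0.0, 1.0]}
print(vectorized_recall(user, contents, 0.3))  # only "A" passes (0.5 > 0.3)
```

As the text notes, this purely similarity-based recall uses no supervision signal, which motivates the supervised extreme multi-classification approach that follows.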
In view of the above problems found in their research, the inventors propose in the present application a data processing method, an apparatus, and a server that can mitigate them. The data processing method includes: first obtaining a first embedding vector representing historical push content viewed by a user, a second embedding vector representing applications used by the user, and a third embedding vector representing user attribute information; then splicing the first embedding vector, the second embedding vector, and the third embedding vector to obtain an embedding vector to be processed; and then inputting the embedding vector to be processed and the embedding vectors of the candidate push content into a target neural network model to obtain the probability corresponding to each candidate push content output by the model. Therefore, once the embedding vector representing the user's interests has been computed, the embedding vectors of all candidate push content can be input into the target neural network model in an extreme multi-classification manner, and the probability that each candidate push content corresponds to the same user can be obtained, which both diversifies the recalled candidate push content and improves its accuracy.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 1, a data processing method provided in the embodiment of the present application is applied to a server, and the method includes:
S110: obtaining a first embedding vector, a second embedding vector, and a third embedding vector, where the first embedding vector is an embedding vector representing historical push content viewed by a user, the second embedding vector is an embedding vector representing applications used by the user, and the third embedding vector is an embedding vector representing attribute information of the user.
It should be noted that what content a user is specifically interested in can be identified by detecting the push content viewed by the user and the applications used by the user, and also by acquiring the user's attribute information. The attribute information may include, among other things, the user's age, gender, and location information.
In the embodiment of the present application, the first embedding vector is an embedding vector that characterizes the historical push content viewed by a user, and there are various ways to obtain it.
In one implementation, a plurality of historical push content sequences corresponding to the user are obtained, where each historical push content sequence comprises a plurality of historical push contents viewed by the user in time order; an embedding vector corresponding to each historical push content sequence is obtained, yielding a plurality of first sub-embedding vectors; and the plurality of first sub-embedding vectors are weighted-averaged to obtain the first embedding vector.
The inventors found that the order in which push content is viewed can, to some extent, reflect the user's viewing habits or interests. Accordingly, when acquiring the historical push content viewed by the user, acquiring both what was viewed and the order in which it was viewed yields more information, so the user's interests can be captured more accurately. In this implementation, the historical push content viewed by the user within a specified time period can be treated as one sequence, so that a plurality of historical push content sequences can be acquired. Optionally, the specified time period may be 1 hour, 1 day, or even 1 week.
Illustratively, if it is detected that the user viewed push content A1, push content B1, and push content C1 within a specified time period, push content A1, push content B1, and push content C1 may be taken as one historical push content sequence L1. Further, if it is detected that the user viewed push content A2, push content B2, and push content C2 within the specified time period, push content A2, push content B2, and push content C2 are taken as one historical push content sequence L2, and if it is detected that the user viewed push content A3, push content B3, and push content C3 within the specified time period, push content A3, push content B3, and push content C3 are taken as one historical push content sequence L3. Correspondingly, the server may generate a first sub-embedding vector for each of the sequences L1, L2, and L3, and then perform a weighted average over these three first sub-embedding vectors to obtain the first embedding vector representing the historical push content viewed by the user.
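The weighted average over sub-embedding vectors can be sketched as follows. This is an illustrative pure-Python sketch with hypothetical 2-dimensional toy vectors; the embodiment does not specify the weights, so uniform weights are assumed here:

```python
def weighted_average(vectors, weights=None):
    """Element-wise weighted average of equal-length embedding vectors."""
    if weights is None:
        weights = [1.0] * len(vectors)  # assumption: uniform weights
    total = sum(weights)
    dim = len(vectors[0])
    return [sum(w * v[i] for w, v in zip(weights, vectors)) / total
            for i in range(dim)]

# Hypothetical first sub-embedding vectors for sequences L1, L2, L3.
sub_l1 = [1.0, 0.0]
sub_l2 = [0.0, 1.0]
sub_l3 = [1.0, 1.0]
first_embedding = weighted_average([sub_l1, sub_l2, sub_l3])
print(first_embedding)  # approximately [0.667, 0.667]
```

The same routine applies unchanged to the second sub-embedding vectors of the application usage sequences described later.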
Alternatively, a corresponding embedding vector may be generated separately for each historical push content viewed by the user, and the embedding vectors of all the historical push contents may then be weighted-averaged to obtain the first embedding vector representing the historical push content viewed by the user. For example, if it is detected that the user viewed push content A1, push content B1, push content C1, and push content A2, corresponding embedding vectors are generated for each of them, and these embedding vectors are weighted-averaged to obtain the first embedding vector.
Similarly, when acquiring the embedding vector representing the applications used by the user, acquiring not only which applications the user has used but also the order in which they were used yields more information about the user's interests in using applications. The acquisition may be performed based on the sequence-based method described above, or based on each individually used application.
In one implementation, a plurality of application usage sequences corresponding to the user may be obtained, where each application usage sequence comprises a plurality of applications used by the user in time order; an embedding vector corresponding to each application usage sequence is obtained, yielding a plurality of second sub-embedding vectors; and the plurality of second sub-embedding vectors are weighted-averaged to obtain the second embedding vector.
For example, if it is detected that the user used application A1, application B1, and application C1 within a specified time period, application A1, application B1, and application C1 may be taken as one application usage sequence L4. Further, if it is detected that the user used application A2, application B2, and application C2 within the specified time period, they are taken as one application usage sequence L5, and if it is detected that the user used application A3, application B3, and application C3 within the specified time period, they are taken as one application usage sequence L6. Correspondingly, the server may generate a second sub-embedding vector for each of the sequences L4, L5, and L6, and then perform a weighted average over these three second sub-embedding vectors to obtain the second embedding vector representing the applications used by the user.
Alternatively, a corresponding embedding vector may be generated separately for each application used by the user, and the embedding vectors of all the applications may then be weighted-averaged to obtain the second embedding vector representing the applications used by the user. For example, if it is detected that the user used application A1, application B1, application C1, and application A2, corresponding embedding vectors are generated for each of them, and these embedding vectors are weighted-averaged to obtain the second embedding vector.
In one implementation, the user's age, gender, and location information are acquired to form the user's attribute information, and an embedding vector corresponding to the attribute information is acquired as the third embedding vector. Optionally, forming the user's attribute information from age, gender, and location information includes: acquiring the user's age and gender; acquiring a plurality of pieces of detected location information corresponding to the user and taking the location information reported the largest number of times as the user's location information; and combining the age, gender, and location information into the user's attribute information. It should be noted that the user's location information may be collected by an electronic device carried by the user, and the user may move among several locations within a period of time; therefore, in order to obtain the user's location more accurately, the location information reported by the same user is examined and the piece reported the largest number of times is used as the user's location information. Illustratively, if the location information reported by user u within a period of time includes location 1, location 2, and location 3, where location 1 was reported 10 times and locations 2 and 3 were each reported once, then location 1 is used as the location information of user u.
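Selecting the most frequently reported location can be sketched with the standard library. The report values below are hypothetical stand-ins for the example of user u:

```python
from collections import Counter

def most_reported_location(reports):
    """Pick the location reported most often as the user's location."""
    return Counter(reports).most_common(1)[0][0]

# Matches the example: location 1 reported 10 times, locations 2 and 3 once each.
reports = ["loc1"] * 10 + ["loc2"] + ["loc3"]
print(most_reported_location(reports))  # loc1
```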
S120: and splicing the first embedded vector, the second embedded vector and the third embedded vector to obtain an embedded vector to be processed, wherein the embedded vector to be processed is an embedded vector representing the interest of a user.
In this embodiment, in order that the first, second, and third embedding vectors can all be referred to simultaneously when retrieving the recalled candidate push content, the first embedding vector, the second embedding vector, and the third embedding vector are spliced to obtain the to-be-processed embedding vector representing the user's interests. For example, if the first embedding vector is [a1, a2, a3, a4], the second embedding vector is [b1, b2, b3, b4], and the third embedding vector is [c1, c2, c3, c4], the to-be-processed vector obtained by splicing the three has the following form:
[a1,a2,a3,a4,b1,b2,b3,b4,c1,c2,c3,c4]
As can be seen from the embedding vector obtained by the above splicing, splicing in this embodiment can be understood as connecting the embedding vectors end to end.
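The end-to-end splicing above is ordinary list concatenation; a minimal sketch using the symbolic components from the example:

```python
# Splicing (concatenating) the three embedding vectors end to end.
first = ["a1", "a2", "a3", "a4"]
second = ["b1", "b2", "b3", "b4"]
third = ["c1", "c2", "c3", "c4"]

to_be_processed = first + second + third
print(to_be_processed)
# ['a1', 'a2', 'a3', 'a4', 'b1', 'b2', 'b3', 'b4', 'c1', 'c2', 'c3', 'c4']
```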
S130: inputting the to-be-processed embedding vector and the embedding vectors of the candidate push content into a target neural network model to obtain the probability corresponding to each candidate push content output by the target neural network model, where a candidate push content with a larger probability matches the user better.
In this embodiment, as one way, the candidate push content input into the target neural network model may be the full set of candidate push content, that is, all candidate push content in the push content library, so that the target neural network model can output, for each candidate push content in the full set, the probability that it corresponds to the same user.
S140: taking the candidate push contents whose probability is greater than a probability threshold as the recalled candidate push contents, so that the push content actually pushed to the user is selected from the recalled candidate push contents.
It should be noted that, after the probability corresponding to each candidate push content is obtained, which candidate push content can be recalled may be determined by comparing each probability with the probability threshold. Moreover, the candidate push contents recalled by the server are not all pushed to the user directly; they are pushed to the user only after being screened again.
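The threshold-based recall of step S140 can be sketched as a simple filter. The candidate ids, probabilities, and threshold below are hypothetical:

```python
def recall(probabilities, threshold):
    """Keep candidates whose model-output probability exceeds the threshold."""
    return [cid for cid, p in probabilities.items() if p > threshold]

probs = {"push_A": 0.7, "push_B": 0.2, "push_C": 0.55}
print(recall(probs, 0.5))  # ['push_A', 'push_C']
```

The recalled list is then screened again (e.g. by a downstream ranking stage) before anything is actually pushed to the user.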
In the data processing method provided in this embodiment, a first embedding vector representing historical push content viewed by a user, a second embedding vector representing applications used by the user, and a third embedding vector representing user attribute information are obtained; the first, second, and third embedding vectors are then spliced to obtain an embedding vector to be processed; and the embedding vector to be processed and the embedding vectors of the candidate push content are input into a target neural network model to obtain the probability corresponding to each candidate push content output by the model. Therefore, once the embedding vector representing the user's interests has been computed, the embedding vectors of all candidate push content can be input into the target neural network model in an extreme multi-classification manner, and the probability that each candidate push content corresponds to the same user can be obtained, which both diversifies the recalled candidate push content and improves its accuracy.
Referring to fig. 2, a data processing method provided in the embodiment of the present application is applied to a server, and the method includes:
S210: obtaining a first embedding vector, a second embedding vector, and a third embedding vector for model training, where the first embedding vector is an embedding vector representing historical push content viewed by a user, the second embedding vector is an embedding vector representing applications used by the user, and the third embedding vector is an embedding vector representing attribute information of the user.
S220: and splicing the first embedded vector, the second embedded vector and the third embedded vector to obtain a to-be-processed embedded vector for model training, wherein the to-be-processed embedded vector is an embedded vector representing the interest of a user.
S230: acquiring a fourth embedding vector and a fifth embedding vector for training, where the fourth embedding vector is an embedding vector representing historical push content not viewed by the user, and the fifth embedding vector is an embedding vector representing applications not used by the user.
S240: and inputting the embedding vector to be processed, the fourth embedding vector, the fifth embedding vector and the embedding vector of the alternative push content into a neural network model to be trained, and training the neural network model to be trained to obtain a target neural network model.
In one implementation, the target neural network comprises a plurality of fully connected layers and a normalization layer arranged in sequence. Inputting the to-be-processed embedding vector and the embedding vectors of the candidate push content into the target neural network model to obtain the probability corresponding to each candidate push content comprises: processing the to-be-processed embedding vector and the embedding vectors of the candidate push content through the fully connected layers and the normalization layer in sequence, to obtain the probability corresponding to each candidate push content output by the normalization layer.
In one implementation, the fully connected layers include a first fully connected layer, a second fully connected layer, and a third fully connected layer arranged in sequence, where the number of neurons in the first fully connected layer is greater than the number of neurons in the second fully connected layer, and the number of neurons in the second fully connected layer is greater than the number of neurons in the third fully connected layer, and the method further includes:
during the training of each fully connected layer, activating based on ReLU units, and accelerating convergence based on Batch Normalization and Dropout.
The processing of this model is described below with reference to fig. 3.
As shown in fig. 3, a first embedding vector is obtained by weighting the embedding vector of each historical push content sequence, and a second embedding vector and a third embedding vector are obtained as introduced above. The first, second, and third embedding vectors are spliced to obtain the to-be-processed embedding vector, which is input to the first fully connected layer; the output of the first fully connected layer is input to the second fully connected layer, and the output of the second fully connected layer is input to the third fully connected layer, yielding a processed user interest vector. The processed user interest vector and the embedding vectors of the full set of candidate push content are then input to the normalization layer, and the probability of each candidate push content is obtained through the multi-classification function of the normalization layer. Furthermore, because the full set of candidate push content is large, computing the probabilities through the multi-classification function of the normalization layer can be understood as extreme multi-classification.
Optionally, the first fully-connected layer, the second fully-connected layer, and the third fully-connected layer may be used to perform the functions of dimensionality reduction and feature extraction on the vector, the number of neurons in the first fully-connected layer may be 256, the number of neurons in the second fully-connected layer may be 128, and the number of neurons in the third fully-connected layer may be 64.
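The three-layer fully connected stack with ReLU activations can be sketched as below. This is an illustrative forward pass in pure Python with randomly initialized weights; toy layer sizes stand in for the 256/128/64 neurons of the embodiment, and Batch Normalization and Dropout (training-time operations) are omitted:

```python
import random

random.seed(0)

def linear(x, w, b):
    """Fully connected layer: y = W.x + b."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

def relu(x):
    return [max(0.0, v) for v in x]

def make_layer(n_in, n_out):
    """Random weight matrix and zero bias for one fully connected layer."""
    w = [[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    return w, [0.0] * n_out

# Toy dimensions standing in for input -> 256 -> 128 -> 64.
dims = [12, 8, 6, 4]
layers = [make_layer(dims[i], dims[i + 1]) for i in range(3)]

x = [0.1] * dims[0]  # the to-be-processed (spliced) embedding vector
for w, b in layers:
    x = relu(linear(x, w, b))  # each layer reduces dimension and extracts features
user_interest_vector = x
print(len(user_interest_vector))  # 4
```

The 64-dimensional (here 4-dimensional) output is the processed user interest vector that the normalization layer compares against the candidate content embeddings.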
S250: the method comprises the steps of obtaining a first embedding vector, a second embedding vector and a third embedding vector, wherein the first embedding vector is an embedding vector for representing historical push content viewed by a user, the second embedding vector is an embedding vector for representing an application program used by the user, and the third embedding vector is an embedding vector for representing attribute information of the user.
S260: and splicing the first embedded vector, the second embedded vector and the third embedded vector to obtain an embedded vector to be processed, wherein the embedded vector to be processed is an embedded vector representing the interest of a user.
S270: inputting the to-be-processed embedding vector and the embedding vectors of the candidate push content into the target neural network model to obtain the probability corresponding to each candidate push content output by the target neural network model, where a candidate push content with a larger probability matches the user better.
As one way, the target neural network model may obtain the probability corresponding to candidate push content i based on the following softmax formula over the full candidate set V:

P(push = i | u) = exp(v_i · u) / Σ_{j ∈ V} exp(v_j · u)

where P(push = i | u) characterizes the probability corresponding to the output candidate push content i, u characterizes the to-be-processed embedding vector, and v_j characterizes the embedding vector of the j-th candidate push content in the full set of candidate push content.
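The softmax over inner products between the user vector and the full set of candidate embeddings can be sketched as follows, with hypothetical toy vectors (the max-subtraction is a standard numerical-stability step, not part of the embodiment's description):

```python
import math

def softmax_probabilities(user_vec, candidate_vecs):
    """P(i | u) = exp(v_i . u) / sum_j exp(v_j . u) over the full candidate set."""
    scores = [sum(a * b for a, b in zip(v, user_vec)) for v in candidate_vecs]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

u = [1.0, 0.0]                                   # processed user interest vector
candidates = [[2.0, 0.0], [0.0, 2.0], [1.0, 1.0]]  # full candidate embeddings
probs = softmax_probabilities(u, candidates)
print(round(sum(probs), 6))  # 1.0 -- probabilities over all candidates sum to one
```

The first candidate has the largest inner product with u, so it receives the highest probability and is the best match for the user.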
S280: and taking the alternative push contents with the corresponding probability larger than the probability threshold value in the alternative push contents as recalled alternative push contents so as to select the push contents pushed to the user from the recalled alternative push contents.
According to the data processing method provided by this embodiment, once the embedding vector representing the user's interests has been computed, the embedding vectors of all candidate push content can be input into the target neural network model in an extreme multi-classification manner, so that the probability that each candidate push content corresponds to the same user can be obtained, which both diversifies the recalled candidate push content and improves its accuracy. Moreover, in this embodiment, the extreme multi-classification can be trained within an acceptable iteration period by adopting the negative sampling technique proposed in Word2Vec. Since each push content is treated as one class, the supervision information can be utilized to the maximum extent, improving recall accuracy while ensuring recall diversity. Furthermore, in this embodiment, historical push content recently viewed by the user may be used as the supervision signal of the model to be trained, so that the trained target neural network model outputs higher probabilities for the candidate push content in which the user has recently been interested.
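The Word2Vec-style negative sampling mentioned above can be sketched as follows. This is an illustrative loss computation with hypothetical vectors: one positive example (recently viewed push content, the supervision signal) plus a few randomly sampled negatives approximate the full extreme-multi-classification softmax loss:

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sampled_loss(user_vec, pos_vec, catalog_vecs, k=2):
    """Negative-sampling loss: score the positive high, k sampled negatives low."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    loss = -math.log(sigmoid(dot(user_vec, pos_vec)))       # positive term
    for neg in random.sample(catalog_vecs, k):              # sampled negatives
        loss += -math.log(sigmoid(-dot(user_vec, neg)))
    return loss

u = [0.5, 0.5]                     # user interest vector
positive = [1.0, 1.0]              # recently viewed push content
catalog = [[-1.0, 0.0], [0.0, -1.0], [-0.5, -0.5]]  # pool to sample negatives from
print(sampled_loss(u, positive, catalog) > 0)  # True: the loss is positive
```

Because only k negatives are scored per step instead of the full candidate set, each training iteration stays cheap even when the catalog is huge.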
Referring to fig. 4, in an embodiment of the present application, a data processing apparatus 300 operating on a server is provided, where the apparatus 300 includes:
an embedded vector obtaining unit 310 is configured to obtain a first embedded vector, a second embedded vector, and a third embedded vector, where the first embedded vector is an embedded vector that characterizes historical push content viewed by a user, the second embedded vector is an embedded vector that characterizes an application used by the user, and the third embedded vector is an embedded vector that characterizes attribute information of the user.
As one mode, the embedded vector obtaining unit 310 is specifically configured to obtain a plurality of historical pushed content sequences corresponding to a user, where each of the historical pushed content sequences includes a plurality of historical pushed contents that the user has viewed in a time sequence; acquiring an embedded vector corresponding to each historical push content sequence to obtain a plurality of first sub-embedded vectors; and carrying out weighted average on the plurality of first sub-embedding vectors to obtain the first embedding vector.
The embedded vector obtaining unit 310 is further specifically configured to obtain a plurality of application program usage sequences corresponding to the user, where each application program usage sequence includes a plurality of application programs used by the user in a time sequence; acquiring an embedded vector corresponding to each application program use sequence to obtain a plurality of second sub-embedded vectors; and carrying out weighted average on the plurality of second sub-embedding vectors to obtain the second embedding vector.
The embedded vector obtaining unit 310 is further specifically configured to obtain attribute information of the user, which is composed of age, gender, and location information of the user; and acquiring an embedded vector corresponding to the attribute information as the third embedded vector.
Optionally, the vector obtaining unit 310 is further specifically configured to obtain an age and a gender of the user; acquiring a plurality of pieces of detected position information corresponding to the user, and taking the position information with the maximum number of reporting times of the plurality of pieces of position information as the position information of the user; and acquiring the age, the sex and the position information of the user to form attribute information of the user.
The vector splicing unit 320 is configured to splice the first embedded vector, the second embedded vector, and the third embedded vector to obtain an embedded vector to be processed, where the embedded vector to be processed is an embedded vector representing user interest.
A vector processing unit 330, configured to input the to-be-processed embedding vector and the embedding vectors of the candidate push content into a target neural network model to obtain the probability corresponding to each candidate push content output by the target neural network model, where a candidate push content with a larger probability matches the user better.
A content recall unit 340, configured to take, as recalled alternative push content, an alternative push content with a probability greater than a probability threshold in the alternative push contents, so as to select, from the recalled alternative push contents, a push content pushed to the user.
As a mode, the data processing apparatus provided in this embodiment may also be used in a model training phase, and optionally, the embedded vector obtaining unit 310 may also be configured to execute S210 and S230 in the foregoing embodiment, in this way, as shown in fig. 5, the data processing apparatus 300 further includes:
and a model training unit 350, configured to input the to-be-processed embedded vector, the fourth embedded vector, the fifth embedded vector, and the embedded vector of the candidate pushed content into a to-be-trained neural network model, and train the to-be-trained neural network model to obtain a target neural network model.
Optionally, the fully connected layers include a first fully connected layer, a second fully connected layer, and a third fully connected layer arranged in sequence, where the number of neurons in the first fully connected layer is greater than the number of neurons in the second fully connected layer, and the number of neurons in the second fully connected layer is greater than the number of neurons in the third fully connected layer. The model training unit 350 is further configured to activate based on ReLU units and to accelerate convergence based on Batch Normalization and Dropout during the training of each fully connected layer.
The data processing apparatus provided by the present application can obtain a first embedding vector representing historical push content viewed by a user, a second embedding vector representing applications used by the user, and a third embedding vector representing user attribute information; splice the first, second, and third embedding vectors to obtain an embedding vector to be processed; and input the embedding vector to be processed and the embedding vectors of the candidate push content into a target neural network model to obtain the probability corresponding to each candidate push content output by the model. Therefore, once the embedding vector representing the user's interests has been computed, the embedding vectors of all candidate push content can be input into the target neural network model in an extreme multi-classification manner, and the probability that each candidate push content corresponds to the same user can be obtained, which both diversifies the recalled candidate push content and improves its accuracy.
It should be noted that the device embodiment and the method embodiment in the present application correspond to each other, and specific principles in the device embodiment may refer to the contents in the method embodiment, which is not described herein again.
An electronic device provided by the present application will be described with reference to fig. 6.
Referring to fig. 6, based on the data processing method and apparatus described above, another server 100 capable of executing the data processing method is further provided in the embodiment of the present application. The server 100 includes one or more processors 102 (only one is shown), a memory 104, and a network module 106 coupled to each other. The memory 104 stores a program that can execute the content of the foregoing embodiments, and the processor 102 can execute the program stored in the memory 104.
The processor 102 may include one or more processing cores. The processor 102 uses various interfaces and lines to connect the various parts of the server 100, and performs the various functions of the server 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 104 and by calling the data stored in the memory 104. Alternatively, the processor 102 may be implemented in hardware in at least one of the forms of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA). The processor 102 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like, wherein the CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communications. It is understood that the modem may not be integrated into the processor 102 and may instead be implemented by a separate communication chip.
The memory 104 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 104 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 104 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, and the like), instructions for implementing the various method embodiments described above, and the like. The data storage area may also store data created by the server 100 in use, such as a phonebook, audio and video data, chat log data, and the like.
The network module 106 is configured to receive and transmit electromagnetic waves and to implement mutual conversion between electromagnetic waves and electrical signals, so as to communicate with a communication network or with other devices, for example, an audio playing device. The network module 106 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, a memory, and so forth. The network module 106 may communicate with various networks, such as the internet, an intranet, or a wireless network, or may communicate with other devices via a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network. For example, the network module 106 may interact with a base station.
Referring to fig. 7, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable storage medium 1100 stores program code that can be called by a processor to perform the methods described in the above method embodiments.
The computer-readable storage medium 1100 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 1100 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 1100 has storage space for program code 1110 for performing any of the method steps of the methods described above. The program code can be read from or written to one or more computer program products. The program code 1110 may be compressed, for example, in a suitable form.
In summary, according to the data processing method, the data processing device, and the server provided by the present application, a first embedded vector representing historical push content browsed by a user, a second embedded vector representing application programs used by the user, and a third embedded vector representing user attribute information are obtained; the first embedded vector, the second embedded vector, and the third embedded vector are spliced to obtain an embedded vector to be processed; and the embedded vector to be processed, together with the embedded vectors of the candidate push contents, is input into a target neural network model to obtain the probability corresponding to each candidate push content output by the target neural network model. Therefore, when the embedded vector representing the user's interests is computed, the embedded vectors of all the candidate push contents can be input into the target neural network model in an extreme multi-classification manner, and the probability that each candidate push content corresponds to the same user can be obtained, which achieves diversity of the recalled candidate push contents and improves the accuracy of the recall.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit them; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.