CN108769198B - Method and device for pushing information - Google Patents

Method and device for pushing information

Info

Publication number
CN108769198B
CN108769198B
Authority
CN
China
Prior art keywords
information
user
time
pushing
pushed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810533014.8A
Other languages
Chinese (zh)
Other versions
CN108769198A (en)
Inventor
李晓鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Original Assignee
Baidu Online Network Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Baidu Online Network Technology Beijing Co Ltd
Priority to CN201810533014.8A
Publication of CN108769198A
Application granted
Publication of CN108769198B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/55Push-based network services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The embodiment of the application discloses a method and a device for pushing information. One embodiment of the method comprises: receiving user motion data sent by a user side; inputting the motion data into a pre-trained user portrait prediction model to obtain user portrait information of the user, wherein the user portrait prediction model is used for representing a corresponding relation between the motion data and the user portrait information; selecting information to be pushed from a preset information set to be pushed based on the user portrait information of the user; and pushing the selected information to be pushed to the user side. The embodiment realizes targeted information push.

Description

Method and device for pushing information
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and a device for pushing information.
Background
Information push, also called "webcasting", is a technology for reducing information overload by delivering, over the Internet and according to certain technical standards or protocols, the information a user needs. By proactively pushing information to users, information push technology reduces the time users spend searching the network.
Disclosure of Invention
The embodiment of the application provides a method and a device for pushing information.
In a first aspect, an embodiment of the present application provides a method for pushing information, including: receiving user motion data sent by a user side; inputting the motion data into a pre-trained user portrait prediction model to obtain user portrait information of a user, wherein the user portrait prediction model is used for representing a corresponding relation between the motion data and the user portrait information; selecting information to be pushed from a preset information set to be pushed based on user portrait information of a user; and pushing the selected information to be pushed to the user side.
In some embodiments, the user portrait prediction model is trained by: acquiring a first training sample data set, where each piece of first training sample data includes motion data of a sample user and user portrait information of that sample user; and obtaining the user portrait prediction model by training using a machine learning method, with the motion data of the sample users included in the first training sample data set as input and the user portrait information corresponding to the input motion data as output.
In some embodiments, pushing the selected information to be pushed to the user terminal includes: inputting the motion data of the user into a pre-trained time prediction model to obtain time information for determining the push time at which information is pushed to the user terminal, where the time prediction model is used to characterize a correspondence between motion data and time information for determining the push time for pushing information to the user terminal from which the motion data originates; determining, based on the obtained time information and the current time, the push time for pushing information to the user terminal; and pushing the selected information to be pushed to the user terminal in response to determining that the current time conforms to the determined push time.
In some embodiments, the temporal prediction model is trained by: acquiring a second training sample data set, wherein the second training sample data comprises time sample information and historical motion data, and the historical motion data is motion data of a sample user in a target time period; the time prediction model is obtained by training using a machine learning method, with the historical motion data included in the second training sample data set as input, and the time sample information corresponding to the input historical motion data as output.
In some embodiments, the information to be pushed in the information set to be pushed corresponds to tag information; and selecting information to be pushed from a preset information set to be pushed based on user portrait information of a user, wherein the information to be pushed comprises: aiming at information to be pushed in a preset information set to be pushed, determining the matching degree between label information corresponding to the information to be pushed and user portrait information of a user; and selecting at least one piece of information to be pushed from the information set to be pushed according to the sequence of the matching degrees from large to small.
In a second aspect, an embodiment of the present application provides an apparatus for pushing information, including: a receiving unit configured to receive the user's motion data transmitted by the user terminal; the input unit is configured to input the motion data into a pre-trained user portrait prediction model to obtain user portrait information of a user, wherein the user portrait prediction model is used for representing a corresponding relation between the motion data and the user portrait information; the selecting unit is configured to select information to be pushed from a preset information set to be pushed based on user portrait information of a user; and the pushing unit is configured to push the selected information to be pushed to the user side.
In some embodiments, the user portrait prediction model is trained by: acquiring a first training sample data set, where each piece of first training sample data includes motion data of a sample user and user portrait information of that sample user; and obtaining the user portrait prediction model by training using a machine learning method, with the motion data of the sample users included in the first training sample data set as input and the user portrait information corresponding to the input motion data as output.
In some embodiments, the pushing unit includes: an input module configured to input the motion data of the user into a pre-trained time prediction model to obtain time information for determining the push time at which information is pushed to the user terminal, where the time prediction model is used to characterize a correspondence between motion data and time information for determining the push time for pushing information to the user terminal from which the motion data originates; a determining module configured to determine, based on the obtained time information and the current time, the push time for pushing information to the user terminal; and a pushing module configured to push the selected information to be pushed to the user terminal in response to determining that the current time conforms to the determined push time.
In some embodiments, the temporal prediction model is trained by: acquiring a second training sample data set, wherein the second training sample data comprises time sample information and historical motion data, and the historical motion data is motion data of a sample user in a target time period; the time prediction model is obtained by training using a machine learning method, with the historical motion data included in the second training sample data set as input, and the time sample information corresponding to the input historical motion data as output.
In some embodiments, the information to be pushed in the information set to be pushed corresponds to tag information; and a selecting unit including: the determining module is configured to determine, for information to be pushed in a preset information set to be pushed, a matching degree between tag information corresponding to the information to be pushed and user portrait information of a user; the selecting module is configured to select at least one piece of information to be pushed from the information set to be pushed according to the sequence of the matching degrees from large to small.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; storage means for storing the one or more programs which, when executed by the one or more processors, cause the one or more processors to implement a method as in any one of the embodiments of the method for pushing information.
In a fourth aspect, the present application provides a computer-readable medium, on which a computer program is stored, which when executed by a processor implements the method as in any one of the embodiments of the method for pushing information.
According to the method and device for pushing information provided by the embodiments of the application, user portrait information of a user is obtained by inputting the motion data received from the user terminal into a pre-trained user portrait prediction model; information to be pushed is then selected from a preset information set to be pushed based on that user portrait information; and finally, the selected information is pushed to the user terminal. The user's motion data is thus effectively utilized, the determined user portrait information is more accurate, and targeted information pushing is realized.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for pushing information, according to the present application;
FIG. 3 is a schematic diagram of an application scenario of a method for pushing information according to the present application;
FIG. 4 is a flow diagram of yet another embodiment of a method for pushing information according to the present application;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for pushing information according to the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing an electronic device according to embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the present method for pushing information or apparatus for pushing information may be applied.
As shown in fig. 1, the system architecture 100 may include clients 1011, 1012, 1013, a network 102, and a server 103. The network 102 is used to provide a medium for communication links between the clients 1011, 1012, 1013 and the server 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may interact with the server 103 via the network 102 using user terminals 1011, 1012, 1013 to send or receive messages or the like. The user terminals 1011, 1012, 1013 may have various communication client applications installed thereon, such as an exercise management application, a web browser application, a shopping application, a search application, social platform software, and the like.
The clients 1011, 1012, 1013 may be hardware or software. When the user terminals 1011, 1012, 1013 are hardware, they may be various electronic devices having a display screen and supporting motion data collection, including but not limited to smart phones, smart watches, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, and so on. When the clients 1011, 1012, 1013 are software, the software can be installed in the electronic devices listed above. It may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
The server 103 may be a server providing various services, such as a background server providing support for the information to be pushed presented on the user terminals 1011, 1012, 1013. The background server may analyze and perform other processing on the received motion data of the user, and feed back a processing result (e.g., the selected information to be pushed) to the user side. For example, the background server may input the motion data of the user received from the user terminal into a pre-trained user portrait prediction model to obtain the user portrait information of the user; then, based on the user portrait information of the user, selecting information to be pushed from a preset information set to be pushed; and finally, the selected information to be pushed can be pushed to the user side.
It should be noted that the method for pushing information provided in the embodiment of the present application is generally performed by the server 103, and accordingly, the apparatus for pushing information is generally disposed in the server 103.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be understood that the number of clients, networks, and servers in fig. 1 is merely illustrative. There may be any number of clients, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for pushing information in accordance with the present application is shown. The method for pushing the information comprises the following steps:
step 201, receiving the user's motion data sent by the user terminal.
In this embodiment, an execution subject of the method for pushing information (for example, the server shown in fig. 1) may receive the motion data of the user sent by the user terminal. The motion data may be data related to the user's exercise behavior and may include, but is not limited to, at least one of: exercise type, exercise duration, calories consumed, days of adherence, and exercise trajectory. The exercise type may include, but is not limited to, at least one of: walking, running, swimming, riding, and mountain climbing. Different exercise types typically come with different motion data. For example, for a "walking" exercise, the motion data may also include the number of steps taken; for a "swimming" exercise, the motion data may also include the swimming stroke, e.g., breaststroke, butterfly stroke, freestyle stroke, etc.; for a "mountain-climbing" exercise, the motion data may also include the climbing height.
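The per-sport structure of such a motion-data record can be sketched as follows. This is an illustrative sketch only; the field names (`duration_minutes`, `days_of_adherence`, etc.) are assumptions, not specified by the patent.

```python
# Illustrative sketch of a per-user motion-data record. Field names
# are hypothetical; `extra` carries sport-specific fields such as
# step count (walking) or climb height (mountain climbing).
def make_motion_record(sport_type, duration_min, calories, days_kept, extra=None):
    """Build a motion-data record for one user and one sport type."""
    record = {
        "type": sport_type,
        "duration_minutes": duration_min,
        "calories": calories,
        "days_of_adherence": days_kept,
    }
    record.update(extra or {})
    return record

walking = make_motion_record("walking", 40, 180, 7, {"steps": 6200})
climbing = make_motion_record("mountain_climbing", 130, 900, 7, {"climb_height_m": 2000})
```

A record like `climbing` above would be what the user terminal sends to the server in step 201.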
The motion data sent by the user terminal may be the user's motion data within a preset number of days (e.g., 1 day, 7 days, 30 days, etc.). Generally speaking, the longer the preset number of days, the more abundant the motion data, and the more accurate the predicted user portrait information.
It should be noted that the execution subject generally needs to first send a request to the user terminal for permission to obtain the motion data; the user may then click an icon presented on the user terminal that represents granting the permission, so that the execution subject is authorized to obtain the motion data on the user terminal.
In some application scenarios, if the target application server has an acquisition right for the motion data on the user side, the execution main body may receive the motion data of the user side by using the target application server, that is, the target application server first acquires the motion data from the user side and then sends the acquired motion data to the execution main body. The target application server may be an application server of an application installed on the user side, for example, a server of a web browser application, a server of social platform software, or the like.
Step 202, inputting the motion data into a pre-trained user portrait prediction model to obtain user portrait information of the user.
In this embodiment, the execution subject may input the motion data of the user received in step 201 into a pre-trained user portrait prediction model to obtain the user portrait information of the user. The user portrait prediction model may be used to characterize a correspondence between motion data and user portrait information. For example, the user portrait prediction model may be a correspondence table in which a plurality of correspondences between motion data and user portrait information are stored, prepared in advance by a technician based on statistics of a large amount of motion data and user portrait information. The user portrait information may be tagged portrait information abstracted from the user's demographic information, social relationships, preference habits, consumption behaviors, and the like, and may include at least one user tag, such as "sports enthusiast", "yoga lover", or "high consumption level".
In some optional implementations of this embodiment, the user portrait prediction model may be obtained by training:
first, a first set of training sample data may be obtained, where the first training sample data may include motion data of a sample user and user profile information of the sample user. As an example, if the motion data of a sample user is "walk less than 500 steps per day," the user portrait information of the corresponding sample user may be "lack of motion"; if the motion data of the sample user is "continuous 7-day mountain climbing, and the mountain climbing height is 2000 meters every day", the user portrait information of the corresponding sample user may be "mountain climbing fan".
Then, the user portrait prediction model may be obtained by training using a machine learning method, with the motion data of the sample users included in the first training sample data set as input, and with the user portrait information corresponding to the input motion data as output. Specifically, a classification model such as a Naive Bayes Model (NBM) or a Support Vector Machine (SVM) may be used: the motion data of the sample users in the first training sample data serves as the model's input, the user portrait information corresponding to the input motion data serves as the model's expected output, and the model is trained by a machine learning method to obtain the user portrait prediction model.
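As a minimal sketch of this supervised training step, the toy classifier below fits on (motion features, portrait label) pairs and predicts a label for new motion data. It uses a nearest-centroid rule as a simplified stand-in for the NBM/SVM classifiers named above, and the feature encoding (average daily steps and climb height) and labels are invented for illustration.

```python
from collections import defaultdict

# Toy nearest-centroid classifier standing in for the NBM/SVM training
# described above. Features and labels are illustrative only.
def train_portrait_model(samples):
    """samples: list of (feature_vector, portrait_label) pairs.
    Returns a predict(features) -> label function."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for features, label in samples:
        if sums[label] is None:
            sums[label] = [0.0] * len(features)
        sums[label] = [s + f for s, f in zip(sums[label], features)]
        counts[label] += 1
    # Per-label mean feature vector (the class centroid).
    centroids = {lbl: [s / counts[lbl] for s in vec] for lbl, vec in sums.items()}

    def predict(features):
        def sq_dist(c):
            return sum((a - b) ** 2 for a, b in zip(features, c))
        return min(centroids, key=lambda lbl: sq_dist(centroids[lbl]))
    return predict

# Hypothetical features: (avg daily steps / 1000, avg daily climb height / 100)
model = train_portrait_model([
    ([0.4, 0.0], "lack of motion"),
    ([0.3, 0.0], "lack of motion"),
    ([6.0, 20.0], "mountain climbing fan"),
    ([5.5, 18.0], "mountain climbing fan"),
])
```

In the patent's terms, `samples` plays the role of the first training sample data set, and `predict` plays the role of the trained user portrait prediction model.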
The user portrait prediction model may be obtained by the execution subject through the training steps above; alternatively, another electronic device may perform the training steps, and the execution subject may then acquire the trained user portrait prediction model from that device.
Step 203, selecting information to be pushed from a preset information set to be pushed based on the user portrait information of the user.
In this embodiment, the executing entity may select the information to be pushed from a preset information set to be pushed based on the user portrait information of the user obtained in step 202. The information to be pushed may include advertising information, e.g., feed advertising information, which is a form of advertising inserted between published messages.
In this embodiment, the execution subject may determine in advance the information to be pushed that is associated with each piece of user portrait information the user portrait prediction model may output. For example, the execution subject may pre-establish a correspondence table between user portrait information and the information to be pushed to the user terminal of a user indicated by that portrait information. The correspondence table may further include a similarity (association degree) between the user portrait information and the corresponding information to be pushed. After obtaining the user portrait information of the user in step 202, the execution subject may use the correspondence table to find, in the preset information set to be pushed, the information to be pushed corresponding to the user portrait information. If the user portrait information corresponds to at least two pieces of information to be pushed, the execution subject may select a preset number of pieces of information to be pushed in descending order of similarity, or may select the information to be pushed whose similarity is greater than a preset first similarity threshold.
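The correspondence-table selection described above (top-N by descending association degree, or filtering by a first similarity threshold) can be sketched as follows. The table contents and the threshold value are hypothetical examples, not taken from the patent.

```python
# Sketch of selecting push information from a correspondence table that
# maps user portrait info to (candidate info, association degree) pairs.
# Table contents and threshold values are illustrative only.
CORRESPONDENCE_TABLE = {
    "lack of motion": [("fitness course ad", 0.9), ("standing desk ad", 0.6)],
    "mountain climbing fan": [("hiking boots ad", 0.95), ("tent ad", 0.7)],
}

def select_to_push(portrait, top_n=None, threshold=None):
    """Return candidate info in descending association order, optionally
    keeping only the top N, or only those above a similarity threshold."""
    candidates = sorted(CORRESPONDENCE_TABLE.get(portrait, []),
                        key=lambda pair: pair[1], reverse=True)
    if threshold is not None:
        candidates = [(info, s) for info, s in candidates if s > threshold]
    if top_n is not None:
        candidates = candidates[:top_n]
    return [info for info, _ in candidates]
```

For example, `select_to_push("mountain climbing fan", top_n=1)` keeps only the most strongly associated candidate, matching the preset-number selection strategy described above.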
It should be noted that the execution subject may determine at least one piece of information to be pushed corresponding to the user portrait information as follows: for each piece of user portrait information the user portrait prediction model may output, the execution subject may select in advance, from the information set to be pushed, the information to be pushed associated with that user portrait information by using a text similarity calculation algorithm (e.g., a sentence-similarity calculation method, a cosine similarity algorithm, etc.), and may also generate a similarity between the user portrait information and each piece of associated information to be pushed. The similarity may range from 0 to 1, and generally, a larger value indicates greater similarity between the information.
As an example, if the user portrait information is "lack of motion", the execution subject may use the correspondence table to find that the information to be pushed corresponding to "lack of motion" in the preset information set to be pushed is: a fitness course advertisement. If the user portrait information is "fitness enthusiast", the execution subject may find that the corresponding information to be pushed is: a sports equipment advertisement.
In some optional implementations of this embodiment, each piece of information to be pushed in the preset information set to be pushed may correspond to tag information; the tag information may be used to summarize the key content of the information to be pushed and may be manually preset. Based on the user portrait information of the user, the execution subject may select information to be pushed from the preset information set to be pushed as follows: for each piece of information to be pushed in the set, the execution subject may determine the similarity between the tag information corresponding to that information and the user portrait information of the user. The similarity calculation may use a known text similarity method such as a cosine similarity algorithm or an edit distance algorithm. The execution subject may also perform word segmentation on the tag information to obtain a tag word set and on the user portrait information to obtain a portrait word set, and then use a semantic similarity calculation method (e.g., one based on a tree hierarchy, or one based on HowNet) to calculate the similarity between words of the same part of speech in the tag word set and the portrait word set, obtaining at least one similarity.
Then, the sum of all these similarities may be determined as the similarity between the tag information corresponding to the information to be pushed and the user portrait information of the user; alternatively, the product of all the similarities may be used; or a weighted sum may be used: for example, the similarity between noun words may be weighted higher than the similarity between adjective words, and the execution subject may determine the weighted sum of the per-part-of-speech similarities as the similarity between the tag information corresponding to the information to be pushed and the user portrait information of the user. Finally, at least one piece of information to be pushed is selected from the information set to be pushed in descending order of similarity. It should be noted that the execution subject may instead select at least one piece of information to be pushed whose similarity is greater than a preset second similarity threshold.
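The weighted-sum variant described above can be sketched as follows. The part-of-speech weights and the per-word similarity values are invented for illustration; in practice they would come from the semantic similarity method (tree hierarchy or HowNet based) applied to the segmented tag and portrait words.

```python
# Sketch of combining per-word similarities into one tag-vs-portrait score,
# weighting noun matches above adjective matches as described above.
# The weights and similarity values below are hypothetical.
POS_WEIGHTS = {"noun": 0.7, "adjective": 0.3}

def combined_similarity(word_sims):
    """word_sims: list of (part_of_speech, similarity) pairs produced by a
    semantic-similarity method; returns the weighted sum as the overall
    similarity between tag information and user portrait information."""
    return sum(POS_WEIGHTS.get(pos, 0.5) * sim for pos, sim in word_sims)

# e.g. tag words vs portrait words: one noun pair, one adjective pair
score = combined_similarity([("noun", 0.9), ("adjective", 0.4)])  # 0.63 + 0.12
```

Replacing the weighted sum with `sum(...)` or a product of the raw similarities yields the other two combination strategies mentioned above.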
As an example, suppose the user portrait information is "swimmer", and the execution subject determines that its similarity to the first piece of information to be pushed, whose tag information is "summer swimming venue recommendation", is 0.9; to the second piece, whose tag information is "quickly learn multiple swimming strokes", is 0.7; and to the third piece, whose tag information is "popular swimsuit styles this summer", is 0.8. Then the first piece of information to be pushed, tagged "summer swimming venue recommendation", may be pushed to the user terminal.
And step 204, pushing the selected information to be pushed to the user side.
In this embodiment, the executing entity may push the information to be pushed selected in step 203 to the user side.
In some application scenarios, if the motion data of the user is received through a target application server, the execution main body may push the selected information to be pushed to the user side by using the target application server, that is, the execution main body may push the selected information to be pushed to the target application server, and then the target application server may push the selected information to be pushed to the user side.
With continuing reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for pushing information according to the present embodiment. In the application scenario of fig. 3, the execution subject 301 for pushing information may first receive the motion data 303 of the user sent by the user terminal 302; as an example, the motion data 303 is "mountain climbing for 7 consecutive days, each climb exceeding two hours". The execution subject 301 may then input the motion data 303 into a pre-trained user portrait prediction model 304 to obtain the user portrait information 305 of the user, "mountain climbing fan". Next, the information to be pushed 306 may be selected from a preset information set to be pushed based on the user portrait information "mountain climbing fan"; for example, the selected information to be pushed 306 may be the following mountain-climbing tips: "Spend 10 to 20 minutes on muscle-stretching activities before climbing, so that the muscles of the whole body are as relaxed as possible. When starting to climb, do not increase the amount of exercise all at once; build it up gradually. Usually, perform some simple warm-up exercises first, and then gradually increase the intensity at a steady breathing rhythm, avoiding sudden changes in breathing frequency during exercise." Finally, the execution subject 301 may push the information to be pushed 306 to the user terminal 302.
The method provided by the embodiment of the application effectively utilizes the motion data of the user, so that the determined user portrait information is more accurate, thereby realizing targeted information pushing.
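The receive–predict–select–push sequence of the scenario above can be sketched as follows. This is an illustrative miniature only, not the patent's implementation: the names `INFO_SET`, `predict_portrait`, and `handle_push`, and the keyword rule standing in for the trained model, are all hypothetical.

```python
# Hypothetical sketch of the push pipeline: receive motion data, predict a
# user portrait, select matching information from a preset set, then push.

# A toy "preset information set to be pushed", keyed by portrait label.
INFO_SET = {
    "mountain climbing fan": "Stretch for 10 to 20 minutes before climbing.",
    "lack of motion": "Try a short daily walk to build activity gradually.",
}

def predict_portrait(motion_data: str) -> str:
    # Stand-in for the pre-trained user portrait prediction model: a real
    # model would be learned from motion data, not a keyword test.
    if "mountain climbing" in motion_data:
        return "mountain climbing fan"
    return "lack of motion"

def handle_push(motion_data: str) -> str:
    # Select the information to be pushed based on the predicted portrait.
    portrait = predict_portrait(motion_data)
    return INFO_SET[portrait]
```

For example, motion data mentioning "mountain climbing for 7 consecutive days" would yield the climbing notice, while low-activity data would yield the walking suggestion.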
With further reference to fig. 4, a flow 400 of yet another embodiment of a method for pushing information is shown. The flow 400 of the method for pushing information comprises the following steps:
step 401, receiving the user motion data sent by the user terminal.
Step 402, inputting the motion data into a pre-trained user portrait prediction model to obtain user portrait information of the user.
Step 403, selecting information to be pushed from a preset information set to be pushed based on the user portrait information of the user.
In the present embodiment, the operations of steps 401 to 403 are substantially the same as the operations of steps 201 to 203, and are not described herein again.
Step 404, inputting the motion data of the user into a pre-trained time prediction model to obtain time information for determining the pushing time for pushing the information to the user terminal.
In this embodiment, the executing entity may input the motion data of the user received in step 401 into a pre-trained time prediction model, so as to obtain time information for determining a pushing time for pushing information to the user terminal. The time information may generally include a time difference (time length) between the pushing time and the current time, together with a time relation word "before" or "after". As an example, the time information may be "after 20 minutes", which indicates that information is pushed to the user terminal 20 minutes after the current time. It should be noted that, since the pushing time is usually a time point after the current time, the time relation word included in the time information is usually "after"; in some cases the time relation word may be omitted, in which case the time information may include only the time difference (time length) between the pushing time and the current time. The time prediction model may be used to characterize a correspondence between motion data and time information used to determine a pushing time for pushing information to the user terminal from which the motion data originates. As an example, the time prediction model may be a correspondence table prepared in advance by a technician based on statistics of a large amount of motion data and the corresponding time information, in which correspondences between multiple pieces of motion data and time information are stored.
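The correspondence-table form of the time prediction model mentioned above can be sketched as a simple lookup. The table contents, keys, and the `(minutes, relation)` encoding of the time information are illustrative assumptions, not values from the patent.

```python
# A minimal correspondence table mapping motion-data keywords to time
# information, encoded as (time length in minutes, time relation word).
TIME_TABLE = {
    "mountain climbing": (20, "after"),  # push 20 minutes after now
    "walking": (10, "after"),
}

def predict_time_info(motion_data: str):
    # Return the time information for the first matching table entry;
    # fall back to a default when no entry matches.
    for keyword, info in TIME_TABLE.items():
        if keyword in motion_data:
            return info
    return (30, "after")
```

A statistical or learned model would replace this table lookup, but the input/output shape — motion data in, time information out — stays the same.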
In some optional implementations of the present embodiment, the temporal prediction model may be obtained by training through the following steps:
first, a second training sample data set may be obtained, where each piece of second training sample data may include time sample information and historical motion data. For second training sample data in the second training sample data set, the historical motion data included therein is the motion data of a sample user in a target time period. The target time period is usually a preset time period before a historical browsing time; the historical browsing time is the time at which the sample user browsed pushed information, the pushed information being information pushed to the user terminal of the sample user in a historical period; and the preset time period may be the period from a preset exercise starting time to a determined target time. The preset exercise starting time may be the starting time of the current exercise; as an example, if the step count of the user starts to increase from zero at 7:30 in the morning, 7:30 may be taken as the starting time of the current exercise. The preset exercise starting time may also be an exercise starting time manually set by the user; as an example, the user may fill in the current exercise type and exercise starting time in an exercise management interface on the user side. The target time may be determined based on the historical browsing time and the time sample information. In general, the target time may be a time that is before the historical browsing time, with a time difference from the historical browsing time equal to the time length included in the time sample information. As an example, if the historical browsing time is 10:00 and the time length included in the time sample information is 20 minutes, the target time may be determined to be 9:40.
Then, a time prediction model may be obtained by training using a machine learning method, with the historical motion data included in the second training sample data of the second training sample data set as input, and the time sample information corresponding to the input historical motion data as output. Specifically, a model for classification, such as a naive Bayesian model or a support vector machine, may be used; the historical motion data included in the second training sample data is taken as the input of the model, the time sample information corresponding to the input historical motion data is taken as the corresponding output of the model, and the model is trained by a machine learning method to obtain the time prediction model.
It should be noted that the time prediction model may be obtained by the execution subject through the training steps described above, or may be trained through the above steps by another electronic device, from which the execution subject then acquires the trained time prediction model.
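The training step above — historical motion data in, time sample information out — can be illustrated with a toy stand-in. A real implementation would train a naive Bayesian or SVM classifier as the text suggests; the nearest-neighbour lookup below, along with the feature encoding (days of activity, hours per session) and the sample values, are assumptions made only to show the input/output shape.

```python
# Toy stand-in for the time prediction model: memorize (features, minutes)
# training pairs, then predict the minutes of the nearest training sample.

def train_time_model(samples):
    # samples: list of (feature_vector, time_sample_minutes) pairs.
    return list(samples)

def predict_minutes(model, features):
    # 1-nearest neighbour on squared distance between feature vectors.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda sample: dist(sample[0], features))[1]

# Hypothetical historical samples: [days of activity, hours per session].
model = train_time_model([([7, 2.0], 20), ([1, 0.5], 60)])
```

Here `predict_minutes(model, [6, 1.8])` would return 20, meaning the push should happen 20 minutes after the current time for a user with similar motion data.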
Step 405, determining a pushing time for pushing information to the user side based on the obtained time information and the current time.
In this embodiment, the execution body may determine, based on the obtained time information and the current time, a pushing time for pushing information to the user side. Specifically, the execution subject may first determine whether the time relation word in the time information is "before" or "after"; in general, the time relation word is "after". Then, if it is determined that the time relation word is "after", the execution body may add the time length in the time information to the current time to obtain the pushing time for pushing information to the user side. For example, if the time information is "after 20 minutes" and the current time is 4:50, the execution subject may determine that the pushing time for pushing information to the user terminal is 5:10.
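The arithmetic in the "4:50 + 20 minutes = 5:10" example above is straightforward with the standard library. The function name is an assumption; the "before" branch is included only for completeness, since the text notes the relation word is usually "after".

```python
from datetime import datetime, timedelta

def compute_push_time(now: datetime, minutes: int, relation: str = "after") -> datetime:
    # Add (or subtract, for "before") the time length to the current time.
    delta = timedelta(minutes=minutes)
    return now + delta if relation == "after" else now - delta

# Time information "after 20 minutes" with a current time of 4:50
# yields a pushing time of 5:10.
push_at = compute_push_time(datetime(2018, 5, 29, 4, 50), 20)
```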
And step 406, in response to determining that the current time meets the determined pushing time, pushing the selected information to be pushed to the user side.
In this embodiment, after the executing entity determines the pushing time in step 405, it may determine whether the current time matches the determined pushing time; if so, the executing entity may push the information to be pushed selected in step 403 to the user end. As an example, if the determined pushing time is 5:10, then when the current time is 5:10, the selected information to be pushed may be pushed to the user side.
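The "current time matches the determined pushing time" check can be sketched as below. The tolerance window is an assumption not stated in the text, added because such a check typically runs periodically rather than at exact instants.

```python
from datetime import datetime

def should_push(now: datetime, push_time: datetime, tolerance_s: int = 30) -> bool:
    # Treat the current time as matching the pushing time when it falls
    # within a small tolerance window around it.
    return abs((now - push_time).total_seconds()) <= tolerance_s
```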
In some application scenarios, if the motion data of the user is received through a target application server, the execution body may push the selected information to be pushed to the user side via the target application server when the current time matches the determined pushing time. That is, the execution body may send the selected information to be pushed and the determined pushing time to the target application server, and the target application server may then push the selected information to be pushed to the user side when the current time matches the determined pushing time.
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the process 400 of the method for pushing information in this embodiment adds a step 404 of determining time information of a pushing time for pushing information to the user terminal, a step 405 of determining a pushing time for pushing information to the user terminal, and a step 406 of pushing selected information to be pushed to the user terminal in response to determining that the current time matches the determined pushing time. Therefore, the scheme described in this embodiment can effectively utilize the motion data of the user to determine the pushing time for pushing the information to the user, thereby improving the click rate of the user on the pushed information.
With further reference to fig. 5, as an implementation of the method shown in the above-mentioned figures, the present application provides an embodiment of an apparatus for pushing information, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be applied to various electronic devices.
As shown in fig. 5, the apparatus 500 for pushing information of the present embodiment includes: a receiving unit 501, an input unit 502, a selecting unit 503 and a pushing unit 504. Wherein, the receiving unit 501 is configured to receive the motion data of the user sent by the user terminal; the input unit 502 is configured to input motion data into a pre-trained user portrait prediction model, resulting in user portrait information of a user, wherein the user portrait prediction model is used to represent a correspondence between the motion data and the user portrait information; the selecting unit 503 is configured to select information to be pushed from a preset information set to be pushed based on user portrait information of a user; the pushing unit 504 is configured to push the selected information to be pushed to the user side.
In this embodiment, the specific processes of the receiving unit 501, the input unit 502, the selecting unit 503 and the pushing unit 504 of the apparatus 500 for pushing information may refer to step 201, step 202, step 203 and step 204 in the corresponding embodiment of fig. 2.
In some optional implementations of this embodiment, the user portrait prediction model may be obtained by training:
first, a first set of training sample data may be obtained, where the first training sample data may include motion data of a sample user and user profile information of the sample user. As an example, if the motion data of a sample user is "walk less than 500 steps per day," the user portrait information of the corresponding sample user may be "lack of motion"; if the motion data of the sample user is "continuous 7-day mountain climbing, and the mountain climbing height is 2000 meters every day", the user portrait information of the corresponding sample user may be "mountain climbing fan".
Then, a user portrait prediction model may be obtained by training using a machine learning method, with the motion data of the sample user included in the first training sample data of the first training sample data set as input, and the user portrait information corresponding to the input motion data as output. Specifically, a model for classification, such as a Naive Bayesian Model (NBM) or a Support Vector Machine (SVM), may be used; the motion data of the sample user included in the first training sample data is taken as the input of the model, the user portrait information corresponding to the input motion data is taken as the corresponding output of the model, and the model is trained by a machine learning method to obtain the user portrait prediction model.
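The portrait-model training step can likewise be illustrated with a toy stand-in. A production version would use naive Bayes or an SVM as stated above; the nearest-value lookup here, the single steps-per-day feature, and the sample labels are all illustrative assumptions.

```python
# Toy stand-in for training the user portrait prediction model: memorize
# (motion feature, portrait label) pairs, then predict the label of the
# closest training sample.

def train_portrait_model(samples):
    # samples: list of (steps_per_day, portrait_label) pairs.
    return list(samples)

def predict_portrait_label(model, steps_per_day):
    return min(model, key=lambda s: abs(s[0] - steps_per_day))[1]

# Hypothetical first training sample data, mirroring the examples above:
# fewer than 500 steps per day -> "lack of motion"; heavy activity -> fan.
model = train_portrait_model([(400, "lack of motion"),
                              (15000, "mountain climbing fan")])
```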
In some optional implementations of the present embodiment, the pushing unit 504 may include an input module (not shown in the figure), a determining module (not shown in the figure), and a pushing module (not shown in the figure). The input module may input the motion data of the user into a pre-trained time prediction model to obtain time information for determining a pushing time for pushing information to the user side. The time information may generally include a time difference (time length) between the pushing time and the current time, together with a time relation word "before" or "after". As an example, the time information may be "after 20 minutes", which indicates that information is pushed to the user terminal 20 minutes after the current time. It should be noted that, since the pushing time is usually a time point after the current time, the time relation word included in the time information is usually "after"; in some cases the time relation word may be omitted, in which case the time information may include only the time difference (time length) between the pushing time and the current time. The time prediction model may be used to characterize a correspondence between motion data and time information used to determine a pushing time for pushing information to the user terminal from which the motion data originates. As an example, the time prediction model may be a correspondence table prepared in advance by a technician based on statistics of a large amount of motion data and the corresponding time information, in which correspondences between multiple pieces of motion data and time information are stored.
The determining module may determine, based on the obtained time information and the current time, a pushing time for pushing information to the user side. Specifically, the determining module may first determine whether the time relation word in the time information is "before" or "after"; in general, the time relation word is "after". Then, if it is determined that the time relation word is "after", the determining module may add the time length in the time information to the current time to obtain the pushing time for pushing information to the user side. For example, if the time information is "after 20 minutes" and the current time is 4:50, the determining module may determine that the pushing time for pushing information to the user terminal is 5:10. The pushing module may determine whether the current time matches the determined pushing time; if so, it may push the information to be pushed selected by the selecting unit 503 to the user side. As an example, if the determined pushing time is 5:10, then when the current time is 5:10, the selected information to be pushed may be pushed to the user side.
In some optional implementations of the present embodiment, the temporal prediction model may be obtained by training through the following steps:
first, a second training sample data set may be obtained, where each piece of second training sample data may include time sample information and historical motion data. For second training sample data in the second training sample data set, the historical motion data included therein is the motion data of a sample user in a target time period. The target time period is usually a preset time period before a historical browsing time; the historical browsing time is the time at which the sample user browsed pushed information, the pushed information being information pushed to the user terminal of the sample user in a historical period; and the preset time period may be the period from a preset exercise starting time to a determined target time. The preset exercise starting time may be the starting time of the current exercise; as an example, if the step count of the user starts to increase from zero at 7:30 in the morning, 7:30 may be taken as the starting time of the current exercise. The preset exercise starting time may also be an exercise starting time manually set by the user; as an example, the user may fill in the current exercise type and exercise starting time in an exercise management interface on the user side. The target time may be determined based on the historical browsing time and the time sample information. In general, the target time may be a time that is before the historical browsing time, with a time difference from the historical browsing time equal to the time length included in the time sample information. As an example, if the historical browsing time is 10:00 and the time length included in the time sample information is 20 minutes, the target time may be determined to be 9:40.
Then, a time prediction model may be obtained by training using a machine learning method, with the historical motion data included in the second training sample data of the second training sample data set as input, and the time sample information corresponding to the input historical motion data as output. Specifically, a model for classification, such as a naive Bayesian model or a support vector machine, may be used; the historical motion data included in the second training sample data is taken as the input of the model, the time sample information corresponding to the input historical motion data is taken as the corresponding output of the model, and the model is trained by a machine learning method to obtain the time prediction model.
In some optional implementation manners of this embodiment, the information to be pushed in the preset information set to be pushed may correspond to tag information; the tag information may be used to summarize the key content of the information to be pushed, and may be preset manually. The selecting unit 503 may include a determining module (not shown) and a selecting module (not shown), and may select information to be pushed from the preset information set to be pushed based on the user portrait information of the user according to the following steps. For each piece of information to be pushed in the preset information set to be pushed, the determining module may determine a similarity between the tag information corresponding to that information and the user portrait information of the user. The determining module may perform the similarity calculation using a known text similarity calculation method, such as cosine similarity or the edit distance algorithm. The determining module may also segment the tag information into a tag word set and segment the user portrait information into a portrait word set, and then use a semantic similarity calculation method (e.g., one based on a tree hierarchy, or one based on a knowledge network) to calculate the similarity between words of the same part of speech in the tag word set and the portrait word set, so as to obtain at least one similarity.
Then, the sum of all the similarities in the at least one similarity may be determined as the similarity between the tag information corresponding to the information to be pushed and the user portrait information of the user; alternatively, the product of all the similarities may be so determined; or a weighted sum may be used. For example, the weight of similarities between words of the noun part of speech may be set higher than that between words of the adjective part of speech, and the determining module may determine the weighted sum of the similarities of words of each part of speech as the similarity between the tag information corresponding to the information to be pushed and the user portrait information of the user. Finally, the selecting module may select at least one piece of information to be pushed from the information set to be pushed in descending order of similarity. It should be noted that the selecting module may also select, from the information set to be pushed, at least one piece of information to be pushed whose similarity is greater than a preset second similarity threshold.
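The rank-and-select step above can be sketched as follows. Word-overlap (Jaccard) similarity is used here only as a simple stand-in for the cosine or semantic similarities discussed in the text, and all function names are illustrative.

```python
# Sketch of the selection step: score each candidate's tag information
# against the user portrait, then take candidates in descending order.

def jaccard(a: str, b: str) -> float:
    # Word-overlap similarity: |intersection| / |union| of the word sets.
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def select_to_push(candidates, portrait: str, k: int = 1):
    # candidates: list of (tag_information, information_to_push) pairs.
    ranked = sorted(candidates, key=lambda c: jaccard(c[0], portrait),
                    reverse=True)
    return [info for _, info in ranked[:k]]
```

For a portrait of "mountain climbing fan", a candidate tagged "mountain climbing fan" outranks one tagged "running lover", so its information is selected first.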
Referring now to FIG. 6, a block diagram of a computer system 600 suitable for use with an electronic device (e.g., server 103 of FIG. 1) implementing an embodiment of the invention is shown. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Liquid Crystal Display (LCD) and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read out therefrom is installed into the storage section 608 as needed.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601. It should be noted that the computer readable medium mentioned above in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor comprises a receiving unit, an input unit, a selecting unit and a pushing unit. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves. For example, the receiving unit may also be described as a "unit that receives the user's motion data transmitted by the user side".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: receiving user motion data sent by a user side; inputting the motion data into a pre-trained user portrait prediction model to obtain user portrait information of a user, wherein the user portrait prediction model is used for representing a corresponding relation between the motion data and the user portrait information; selecting information to be pushed from a preset information set to be pushed based on user portrait information of a user; and pushing the selected information to be pushed to the user side.
The foregoing description is merely of preferred embodiments of the present application and an illustration of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention is not limited to technical solutions formed by the specific combination of the above features, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the scope of the invention as defined by the appended claims, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present application.

Claims (12)

1. A method for pushing information, comprising:
receiving user motion data sent by a user side;
inputting the motion data into a pre-trained user portrait prediction model to obtain user portrait information of the user, wherein the user portrait prediction model is used for representing a corresponding relation between the motion data and the user portrait information;
selecting information to be pushed from a preset information set to be pushed based on the user portrait information of the user;
and determining time information by utilizing the motion data of the user, determining pushing time for pushing information to the user side based on the time information and the current time, and pushing the selected information to be pushed to the user side at the pushing time.
2. The method of claim 1, wherein the user representation prediction model is trained by:
acquiring a first training sample data set, wherein the first training sample data comprises motion data of a sample user and user portrait information of the sample user;
and training by using a machine learning method to obtain a user portrait prediction model, by using the motion data of the sample user included in the first training sample data set as input and using the user portrait information corresponding to the input motion data of the sample user as output.
3. The method of claim 1, wherein the determining time information by using the motion data of the user, determining a pushing time for pushing information to the user terminal based on the time information and a current time, and pushing the selected information to be pushed to the user terminal at the pushing time comprises:
inputting the motion data of the user into a pre-trained time prediction model to obtain time information for determining the pushing time for pushing information to the user side, wherein the time prediction model is used for representing the corresponding relation between the motion data and the time information for determining the pushing time for pushing information to the user side from which the motion data originates;
determining pushing time for pushing information to the user side based on the obtained time information and the current time;
and in response to the fact that the current time is determined to accord with the determined pushing time, pushing the selected information to be pushed to the user side.
4. The method of claim 3, wherein the temporal prediction model is trained by:
acquiring a second training sample data set, wherein the second training sample data comprises time sample information and historical motion data, and the historical motion data is motion data of a sample user in a target time period;
and training by using a machine learning method to obtain a time prediction model by taking historical motion data included in second training sample data in the second training sample data set as input and taking time sample information corresponding to the input historical motion data as output.
5. The method according to any one of claims 1 to 4, wherein each piece of information to be pushed in the information set to be pushed corresponds to tag information; and
the selecting information to be pushed from a preset information set to be pushed based on the user portrait information of the user comprises:
for each piece of information to be pushed in the preset information set to be pushed, determining a matching degree between the tag information corresponding to the information to be pushed and the user portrait information of the user; and
selecting at least one piece of information to be pushed from the information set to be pushed in descending order of the matching degrees.
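A non-limiting sketch of this selection step follows, assuming (purely for illustration, since the claim fixes neither) that tag information and user portrait information are sets of labels and that the matching degree is their Jaccard similarity:

```python
def matching_degree(tags, portrait):
    # Jaccard similarity as a simple stand-in matching degree.
    if not tags and not portrait:
        return 0.0
    return len(tags & portrait) / len(tags | portrait)

def select_to_push(candidates, portrait, k=2):
    """candidates: list of (info, tag_set) pairs from the preset set.
    Returns the top-k pieces of information to be pushed, ordered by
    matching degree from largest to smallest."""
    ranked = sorted(candidates,
                    key=lambda c: matching_degree(c[1], portrait),
                    reverse=True)
    return [info for info, _ in ranked[:k]]
```

Any similarity measure between tags and portrait (cosine over embeddings, learned scoring, etc.) would satisfy the claim's "matching degree"; Jaccard is chosen here only for brevity.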
6. An apparatus for pushing information, comprising:
a receiving unit configured to receive motion data of a user sent by a user terminal;
an input unit configured to input the motion data into a pre-trained user portrait prediction model to obtain user portrait information of the user, wherein the user portrait prediction model is used to represent a corresponding relationship between motion data and user portrait information;
a selecting unit configured to select information to be pushed from a preset information set to be pushed based on the user portrait information of the user; and
a pushing unit configured to determine time information by using the motion data of the user, determine a pushing time for pushing information to the user terminal based on the time information and a current time, and push the selected information to be pushed to the user terminal at the pushing time.
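The receiving, input, and selecting units of this apparatus can be sketched end to end as methods of one class. This is a hypothetical illustration only: the portrait model is injected as a callable (the real one is trained as described in the other claims), and selection uses a simple tag-overlap count as the matching degree.

```python
class PushApparatus:
    def __init__(self, portrait_model, push_infos):
        self.portrait_model = portrait_model  # motion data -> portrait labels
        self.push_infos = push_infos          # preset set: {info: tag_set}

    def receive(self, motion_data):
        # Receiving unit: accept motion data sent by the user terminal.
        self.motion_data = motion_data
        return self

    def predict_portrait(self):
        # Input unit: feed motion data to the user portrait prediction model.
        self.portrait = self.portrait_model(self.motion_data)
        return self

    def select(self):
        # Selecting unit: pick the candidate whose tags best match
        # the user portrait information (overlap count as a stand-in
        # matching degree).
        return max(self.push_infos,
                   key=lambda info: len(self.push_infos[info] & self.portrait))
```

Chaining `receive(...).predict_portrait().select()` mirrors the unit pipeline; the pushing unit's timing behavior is omitted here, since it is the subject of claims 3 and 8.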
7. The apparatus of claim 6, wherein the user portrait prediction model is trained by:
acquiring a first training sample data set, wherein each piece of first training sample data comprises motion data of a sample user and user portrait information of the sample user; and
training, by using a machine learning method, with the motion data of the sample user included in the first training sample data in the first training sample data set as input and the user portrait information corresponding to the input motion data of the sample user as output, to obtain the user portrait prediction model.
8. The apparatus of claim 6, wherein the pushing unit comprises:
an input module configured to input the motion data of the user into a pre-trained time prediction model to obtain time information for determining the pushing time for pushing information to the user terminal, wherein the time prediction model is used to represent a corresponding relationship between motion data and time information for determining a pushing time for pushing information to the user terminal from which the motion data originates;
a determining module configured to determine the pushing time for pushing information to the user terminal based on the obtained time information and the current time; and
a pushing module configured to push the selected information to be pushed to the user terminal in response to determining that the current time matches the determined pushing time.
9. The apparatus of claim 8, wherein the time prediction model is trained by:
acquiring a second training sample data set, wherein each piece of second training sample data comprises time sample information and historical motion data, the historical motion data being motion data of a sample user within a target time period; and
training, by using a machine learning method, with the historical motion data included in the second training sample data in the second training sample data set as input and the time sample information corresponding to the input historical motion data as output, to obtain the time prediction model.
10. The apparatus according to any one of claims 6 to 9, wherein each piece of information to be pushed in the information set to be pushed corresponds to tag information; and
the selecting unit comprises:
a determining module configured to determine, for each piece of information to be pushed in the preset information set to be pushed, a matching degree between the tag information corresponding to the information to be pushed and the user portrait information of the user; and
a selecting module configured to select at least one piece of information to be pushed from the information set to be pushed in descending order of the matching degrees.
11. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-5.
12. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-5.
CN201810533014.8A 2018-05-29 2018-05-29 Method and device for pushing information Active CN108769198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810533014.8A CN108769198B (en) 2018-05-29 2018-05-29 Method and device for pushing information


Publications (2)

Publication Number Publication Date
CN108769198A (en) 2018-11-06
CN108769198B (en) 2021-11-12

Family

ID=64003807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810533014.8A Active CN108769198B (en) 2018-05-29 2018-05-29 Method and device for pushing information

Country Status (1)

Country Link
CN (1) CN108769198B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109714277A (en) * 2018-12-28 2019-05-03 上海掌门科技有限公司 Information flow calling, distribution method, electronic equipment and medium
CN109816494B (en) * 2019-01-31 2021-05-28 北京卡路里信息技术有限公司 Course recommendation method, device, equipment and storage medium
CN110134872A (en) * 2019-05-27 2019-08-16 北京字节跳动网络技术有限公司 Determination method, apparatus, server and the readable medium on information notice opportunity
CN112347375B (en) * 2019-08-07 2023-09-26 腾讯科技(深圳)有限公司 Method and device for pushing exercise document and storage medium
CN111859102A (en) * 2020-02-17 2020-10-30 北京嘀嘀无限科技发展有限公司 Prompt information determination method, system, medium and storage medium
CN112309390A (en) * 2020-03-05 2021-02-02 北京字节跳动网络技术有限公司 Information interaction method and device
CN111460319A (en) * 2020-03-31 2020-07-28 深圳前海微众银行股份有限公司 Message playing optimization method, device, equipment and readable storage medium
CN111556155B (en) * 2020-04-29 2023-01-24 中国银行股份有限公司 Information pushing method and device
CN111783873B (en) * 2020-06-30 2023-08-25 中国工商银行股份有限公司 User portrait method and device based on increment naive Bayes model
CN111988407B (en) * 2020-08-20 2023-11-07 腾讯科技(深圳)有限公司 Content pushing method and related device
CN112465565B (en) * 2020-12-11 2023-09-26 加和(北京)信息科技有限公司 User portrait prediction method and device based on machine learning
CN112685641B (en) * 2020-12-31 2023-04-07 五八有限公司 Information processing method and device
CN112954066A (en) * 2021-02-26 2021-06-11 北京三快在线科技有限公司 Information pushing method and device, electronic equipment and readable storage medium

Citations (10)

Publication number Priority date Publication date Assignee Title
CN102889889A (en) * 2011-07-18 2013-01-23 神达电脑股份有限公司 Method for monitoring fitness state of user of personal navigation device and related device
CN103500266A (en) * 2013-09-05 2014-01-08 北京航空航天大学 Method and device for pushing application information based on health recognition
CN105049526A (en) * 2015-08-19 2015-11-11 网易(杭州)网络有限公司 Push method, device and system of game gift bag
CN105404658A (en) * 2015-11-04 2016-03-16 中国联合网络通信集团有限公司 Homomorphic friend-making relationship establishment method and system and mobile terminal
CN105678064A (en) * 2015-12-31 2016-06-15 小米科技有限责任公司 Fitness scheme recommending method and device
CN105740331A (en) * 2016-01-22 2016-07-06 百度在线网络技术(北京)有限公司 Information push method and device
CN105808959A (en) * 2016-03-16 2016-07-27 北京永数网络科技有限公司 Motion detection system, motion detection terminal and cloud platform
CN106407425A (en) * 2016-09-27 2017-02-15 北京百度网讯科技有限公司 A method and a device for information push based on artificial intelligence
CN106803190A (en) * 2017-01-03 2017-06-06 北京掌阔移动传媒科技有限公司 A kind of ad personalization supplying system and method
WO2017167121A1 (en) * 2016-03-31 2017-10-05 阿里巴巴集团控股有限公司 Method and device for determining and applying association relationship between application programs

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN102681999A (en) * 2011-03-08 2012-09-19 阿里巴巴集团控股有限公司 Method and device for collecting and sending user action information



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant