CN108509611B - Method and device for pushing information - Google Patents

Method and device for pushing information

Info

Publication number
CN108509611B
Authority
CN
China
Prior art keywords: information, target video, person, candidate information, face
Legal status: Active (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Application number: CN201810295444.0A
Other languages: Chinese (zh)
Other versions: CN108509611A (en)
Inventor: 李元朋
Current Assignee: Baidu Online Network Technology Beijing Co Ltd (the listed assignees may be inaccurate)
Original Assignee: Baidu Online Network Technology Beijing Co Ltd
Events:
- Application filed by Baidu Online Network Technology Beijing Co Ltd
- Priority to CN201810295444.0A
- Publication of CN108509611A
- Application granted
- Publication of CN108509611B
Current legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation

Abstract

The embodiments of the present application disclose a method and a device for pushing information. One embodiment of the method comprises: tracking and detecting face images in a target video; determining a first person identifier associated with the face images in the target video based on a face information set and the face images in the target video, wherein the face information set comprises face information associated with person identifiers; selecting, from a candidate information set, candidate information corresponding to the first person identifier as first candidate information according to a pre-established correspondence between candidate information and person identifiers; and pushing the first candidate information. This embodiment realizes targeted information pushing.

Description

Method and device for pushing information
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and a device for pushing information.
Background
Information push, also called "network broadcast", is a technology that reduces information overload by delivering, over the Internet and according to a given technical standard or protocol, the information a user needs. By actively pushing information to the user, information push technology can reduce the time the user spends searching the network.
Disclosure of Invention
The embodiment of the application provides a method and a device for pushing information.
In a first aspect, an embodiment of the present application provides a method for pushing information, the method comprising: tracking and detecting face images in a target video; determining a first person identifier associated with the face images in the target video based on a face information set and the face images in the target video, wherein the face information set comprises face information associated with person identifiers; selecting, from a candidate information set, candidate information corresponding to the first person identifier as first candidate information according to a pre-established correspondence between candidate information and person identifiers; and pushing the first candidate information.
In some embodiments, determining the first person identifier associated with the face images in the target video based on the face information set and the face images in the target video comprises: in response to detecting that the face images in the target video indicate at least two people, assigning a person identifier to the face images indicating the same person; determining a predetermined number of target person identifiers according to the number of face images corresponding to each person identifier, wherein the number of face images corresponding to a person identifier is the number of face images in the target video assigned to that person identifier; and for each target person identifier, determining the first person identifier associated with the face images assigned to that target person identifier according to the face information set and those face images.
In some embodiments, the method further comprises: in response to detecting a pause operation on the target video, determining a target video frame displayed when the target video is paused; and determining a second person identifier associated with a face image in the target video frame based on the face information set.
In some embodiments, the method further comprises: selecting, from the candidate information set according to the correspondence, candidate information corresponding to the second person identifier as second candidate information; and pushing the second candidate information.
In some embodiments, the method further comprises: performing optical character recognition on the target video frame to determine text information in the target video frame; selecting candidate information from the candidate information set as third candidate information according to the text information; and pushing the third candidate information.
In some embodiments, performing optical character recognition on the target video frame to determine text information in the target video frame comprises: in response to detecting that the target video frame contains a bullet screen and/or a subtitle, determining bullet-screen text information and/or subtitle text information from the target video frame.
In some embodiments, selecting the third candidate information from the candidate information set according to the text information comprises: extracting keywords from the bullet-screen text information and/or the subtitle text information; and selecting, from the candidate information set, candidate information corresponding to the extracted keywords as third candidate information according to a pre-established correspondence between keywords and candidate information.
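As an illustrative sketch only (not the patent's implementation), the keyword-to-candidate correspondence described above can be modeled as a lookup table; the keywords, candidate names, and the trivially simple keyword extractor below are all hypothetical.

```python
# Hypothetical sketch of selecting third candidate information by matching
# keywords from bullet-screen/subtitle text against a pre-established
# keyword-to-candidate correspondence.

KEYWORD_TO_CANDIDATE = {
    "sports car": "advertisement C",
    "wedding dress": "advertisement D",
}

def extract_keywords(text: str) -> list:
    # Minimal stand-in for real keyword extraction: keep the known
    # keywords that literally occur in the text.
    return [kw for kw in KEYWORD_TO_CANDIDATE if kw in text]

def select_third_candidates(subtitle_text: str) -> list:
    keywords = extract_keywords(subtitle_text)
    return [KEYWORD_TO_CANDIDATE[kw] for kw in keywords]

print(select_third_candidates("the hero drives a sports car into the city"))
# ['advertisement C']
```

A production system would replace `extract_keywords` with a proper segmentation and keyword-extraction step; the lookup structure, however, mirrors the correspondence the text describes.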
In some embodiments, the method further comprises: establishing association relation information according to the first person identifiers associated with the face images in the target video, wherein the association relation information indicates an association relation among the first person identifiers.
In some embodiments, the method further comprises: adding the association relation information to a pre-established association relation information set, wherein each piece of association relation information in the set indicates an association relation between at least two person identifiers.
In some embodiments, the method further comprises: determining, from the association relation information set, a fourth person identifier having an association relation with at least one of the first person identifier, the second person identifier, and the third person identifier; selecting, from the candidate information set according to the correspondence, candidate information corresponding to the fourth person identifier as fourth candidate information; and pushing the fourth candidate information.
In a second aspect, an embodiment of the present application provides an apparatus for pushing information, where the apparatus includes: the detection unit is configured to track and detect a face image in a target video; a first determining unit, configured to determine a first person identity associated with a face image in the target video based on a face information set and the face image in the target video, where the face information set includes face information associated with the person identity; the first selection unit is configured to select candidate information corresponding to the first person identity as first candidate information from a candidate information set according to a pre-established correspondence between the candidate information and the person identity; and the first pushing unit is configured to push the first candidate information.
In some embodiments, the first determining unit is further configured to: in response to detecting that the face images in the target video indicate at least two people, assigning person identifications to the face images indicating the same people; determining a preset number of target person identifications according to the number of the face images corresponding to the person identifications, wherein the number of the face images corresponding to each person identification is as follows: the number of face images in the target video assigned to the character identifier; and for each target person identification, determining a first person identification associated with the face image distributed to the target person identification according to the face information set and the face image distributed to the target person identification.
In some embodiments, the above apparatus further comprises: a second determination unit configured to determine a target video frame displayed when the target video is paused, in response to detection of a pause play operation for the target video; and the third determining unit is configured to determine a second person identity associated with the face image in the target video frame based on the face information set.
In some embodiments, the above apparatus further comprises: a second selecting unit, configured to select, from the candidate information set, candidate information corresponding to the second person id as second candidate information according to the correspondence; and the second pushing unit is configured to push the second candidate information.
In some embodiments, the above apparatus further comprises: a fourth determining unit, configured to perform optical character recognition on the target video frame, and determine text information in the target video frame; a third selecting unit, configured to select candidate information from the candidate information set according to the text information, as third candidate information; and the third pushing unit is configured to push the third candidate information.
In some embodiments, the fourth determining unit is further configured to: in response to detecting that the target video frame contains a bullet screen and/or a subtitle, determine bullet-screen text information and/or subtitle text information from the target video frame.
In some embodiments, the third selecting unit is further configured to: extracting key words in the bullet screen text information and/or the subtitle text information; and selecting candidate information corresponding to the extracted keyword from the candidate information set as third candidate information according to the corresponding relation between the pre-established keyword and the candidate information.
In some embodiments, the above apparatus further comprises: and the establishing unit is configured to establish association relationship information according to a first person identification associated with the face image in the target video, wherein the association relationship information is used for indicating an association relationship between the first person identification.
In some embodiments, the above apparatus further comprises: an adding unit configured to add the association relation information to a pre-established association relation information set, wherein each piece of association relation information in the set indicates an association relation between at least two person identifiers.
In some embodiments, the above apparatus further comprises: a fifth determining unit, configured to determine, from the association relationship information set, a fourth person identifier having an association relationship with at least one of the following: a first person identification, the second person identification, and the third person identification; a fourth selecting unit, configured to select, from the candidate information set according to the correspondence, candidate information corresponding to the fourth person id as fourth candidate information; and the fourth pushing unit is configured to push the fourth candidate information.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any implementation manner of the first aspect.
In a fourth aspect, the present application provides a computer-readable medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method as described in any implementation manner of the first aspect.
According to the method and the device for pushing information provided by the embodiments of the application, face images in the target video are first tracked and detected; next, a first person identifier associated with the face images is determined based on a face information set and the face images in the target video, wherein the face information set comprises face information associated with person identifiers; then, candidate information corresponding to the first person identifier is selected from a candidate information set as first candidate information according to a pre-established correspondence between candidate information and person identifiers; finally, the first candidate information is pushed, thereby realizing targeted information pushing.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for pushing information, according to the present application;
FIG. 3 is a schematic diagram of an application scenario of a method for pushing information according to the present application;
FIG. 4 is a flow diagram of yet another embodiment of a method for pushing information according to the present application;
FIG. 5A is a flow diagram of yet another embodiment of a method for pushing information according to the present application;
FIG. 5B is a flow chart according to an alternative implementation of step 508 in FIG. 5A of the present application;
FIG. 6 is a schematic block diagram illustrating one embodiment of an apparatus for pushing information according to the present application;
fig. 7 is a schematic structural diagram of a computer system suitable for implementing the terminal device or the server according to the embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the present method for pushing information or apparatus for pushing information may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as a video playing application, a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like, may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, and 103 may be hardware or software. When they are hardware, they may be various electronic devices that have a display screen and support video playing, including but not limited to smart phones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above and implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. This is not specifically limited here.
The server 105 may be a server providing various services, such as a background server providing support for video playback type applications on the terminal devices 101, 102, 103. The background server may analyze and perform other processing on the received data such as the video identifier, and push the processing result (e.g., candidate information) to the terminal device.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be noted that the method for pushing information provided by the embodiments of the present application is generally performed by the server 105, and accordingly, the apparatus for pushing information is generally disposed in the server 105. Optionally, the method may also be executed by the terminal devices 101, 102, and 103, with the apparatus disposed in the terminal devices 101, 102, and 103. Optionally, the method may also be executed jointly by the server 105 and the terminal devices 101, 102, and 103; for example, the step of "tracking and detecting face images in the target video" may be executed by the terminal devices 101, 102, and 103, and the remaining steps may be executed by the server 105. Accordingly, the respective units in the apparatus for pushing information may be distributed across the server 105 and the terminal devices 101, 102, 103. This is not limited in the present application.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for pushing information in accordance with the present application is shown. The method for pushing the information comprises the following steps:
Step 201, tracking and detecting face images in a target video.
In the present embodiment, an execution subject of the method for pushing information (e.g., the server shown in fig. 1) may track and detect face images in the target video.
In this embodiment, the target video may be the video that serves as the basis for executing the method described in this application.
Optionally, the target video may be selected from a video set by the execution subject.
Alternatively, the user may play a video using the terminal. The terminal may send the video identifier of that video to the execution subject. The execution subject may first receive the video identifier and then determine the video indicated by the video identifier as the target video.
In this embodiment, the execution subject may track and detect face images in the target video in various ways.
In some optional implementations of this embodiment, face images in the target video may be tracked and detected using a pre-established face image tracking model. The face image tracking model represents the correspondence between a video and a set of face image position sequences. The set may include one or more face image position sequences, each indicating the positions of a face image in the frames of the video. A video may include face images of multiple people, with one person corresponding to one face image position sequence.
In some optional implementations of this embodiment, face tracking detection is performed using a target tracking technique that takes face images as the tracking targets. Generally, the execution subject can analyze each video frame of the target video to detect face images, and tracking and detection may be interleaved. As an example, given a target face image detected in the current frame, a face image indicating the same person as the target face image may be tracked among the face images detected in the next frame.
It should be noted that step 201 detects which face images belong to the same person; it does not identify who that person is.
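The interleaved detect-and-track behavior of step 201 can be sketched as follows. This is an illustrative simplification, not the patent's algorithm: per-frame detections are grouped into per-person position sequences by greedily attaching each detection to the track whose most recent box overlaps it most (intersection over union); all box coordinates are hypothetical.

```python
# Illustrative sketch: group per-frame face detections into per-person
# position sequences ("tracks") using IoU overlap between consecutive boxes.

def iou(a, b):
    # Boxes are (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def track_faces(frames, threshold=0.3):
    """frames: list of per-frame lists of face boxes.
    Returns a list of tracks, each a list of (frame_index, box)."""
    tracks = []
    for t, detections in enumerate(frames):
        for box in detections:
            best = max(tracks, key=lambda tr: iou(tr[-1][1], box), default=None)
            if best is not None and iou(best[-1][1], box) >= threshold:
                best.append((t, box))       # same person, continue the track
            else:
                tracks.append([(t, box)])   # new person, start a new track
    return tracks

frames = [[(10, 10, 50, 50)], [(12, 11, 52, 51), (200, 200, 240, 240)]]
print(len(track_faces(frames)))  # 2: one continued track, one new person
```

Each resulting track corresponds to one face image position sequence in the sense of the tracking model described above; note that, exactly as the text says, this step groups faces by person but does not identify who each person is.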
Step 202, determining a first person identity associated with the face image in the target video based on the face information set and the face image in the target video.
In this embodiment, an executing subject (for example, a server shown in fig. 1) of the method for pushing information may determine, based on the face information set and the face image in the target video, the first person identifier associated with the face image in the target video. Here, the face information set includes face information associated with a person identification.
In the present embodiment, the face information may be information for distinguishing one face image from other face images. Optionally, the face information may be a face image or a face image feature.
In this embodiment, the person identifier indicates the identity of a person. A person identifier is unique and can distinguish one person from all others. As an example, the identity of a person indicates whether that person is Zhang San or Li Si. The person identifier can be the person's name or an identity code.
Optionally, the face information may be a face image, and the execution subject may determine, through a pre-trained face recognition model, the face image in the face information set that matches a face image in the target video, and then determine the person identifier associated with that face image in the face information set as the first person identifier associated with the face image in the target video.
Optionally, the face information may be a face image feature. The execution subject may extract a face image feature from the target video as the target face image feature, then determine the matching face information in the face information set by computing the similarity between the target face image feature and each face image feature in the face information set, and finally determine the person identifier associated with the matched face information as the first person identifier associated with the face image in the target video.
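The similarity-based matching just described can be sketched as a nearest-neighbor search over feature vectors. This is an illustrative sketch, not the patent's implementation: the feature vectors, identifiers, and similarity threshold below are hypothetical, and cosine similarity is one common choice of similarity measure.

```python
# Illustrative sketch: match a target face feature against the face
# information set and return the person identifier of the best match.
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def match_identity(target_feature, face_info_set, min_similarity=0.8):
    """face_info_set: list of (person_identifier, feature_vector) pairs.
    Returns the identifier of the most similar face, or None if no face
    exceeds the minimum similarity."""
    best_id, best_sim = None, min_similarity
    for person_id, feature in face_info_set:
        sim = cosine_similarity(target_feature, feature)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id

face_info_set = [("actor A", [0.9, 0.1, 0.2]), ("actor B", [0.1, 0.9, 0.3])]
print(match_identity([0.88, 0.12, 0.19], face_info_set))  # actor A
```

Returning `None` below the threshold models the case where a face in the video matches no one in the face information set, so no first person identifier is determined for it.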
In some optional implementations of this embodiment, the first person identifiers associated with all of the face images in the target video may be determined, or only those associated with some of the face images.
In some optional implementations of this embodiment, if the face images in the target video all indicate the same person, the first person identifier associated with all of the face images may be determined. If the face images in the target video indicate multiple people, the associated first person identifiers may be determined using only some of the face images.
Step 203, selecting candidate information corresponding to the first person identity as first candidate information from the candidate information set according to the pre-established corresponding relationship between the candidate information and the person identity.
In this embodiment, an executing entity (for example, a server shown in fig. 1) of the method for pushing information may select candidate information corresponding to the first person id from a candidate information set as first candidate information according to a pre-established correspondence between the candidate information and the person id.
In this embodiment, the candidate information set may include one or more pieces of candidate information. Candidate information may be information intended for presentation.
The candidate information may be in the form of text, image, video, or the like, as an example.
The candidate information may be, for example, a movie, a tv show, an advertisement, or the like.
In this embodiment, the correspondence between person identifiers and candidate information may be established in advance. As an example, correspondences may be established between actor A and TV series A, between actor A and TV series B, and between actor A and advertisement C.
It can be understood that the first person identifier determined in step 202 may be compared with the person identifiers in the correspondence to find a person identifier consistent with it. The candidate information corresponding to that person identifier is then taken as the candidate information corresponding to the first person identifier and, finally, as the first candidate information.
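The comparison and selection just described amounts to a table lookup. As an illustrative sketch (the identifier and the works below are hypothetical examples in the spirit of those in the text, not the patent's data):

```python
# Illustrative sketch: select first candidate information via the
# pre-established person-identifier-to-candidate-information correspondence.
CORRESPONDENCE = {
    "actor A": ["TV series A", "TV series B", "advertisement C"],
}

def select_first_candidates(first_person_id, correspondence):
    # Compare the determined identifier against those in the correspondence
    # and return the matching candidate information (empty list if none).
    return correspondence.get(first_person_id, [])

print(select_first_candidates("actor A", CORRESPONDENCE))
# ['TV series A', 'TV series B', 'advertisement C']
```

In practice the correspondence would likely live in a database or key-value store rather than an in-memory dictionary, but the lookup logic is the same.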
Step 204, pushing the first candidate information.
In the present embodiment, an executing subject (e.g., a server shown in fig. 1) of the method for pushing information may push the first candidate information to other electronic devices.
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for pushing information according to the present embodiment. In the application scenario of fig. 3:
First, the user can watch a video in the video playback window 301 using the terminal; for example, the first episode of drama A, with a video picture (a character in ancient costume) displayed in the video playback window 301. The terminal may then transmit a video identifier, for example "drama A first episode", to the server.
The server may then take the first episode of drama A as the target video. The server can track and detect face images in the target video. As an example, a series of face images indicating the same person may be detected in the target video.
The server may then determine a first person identifier associated with the face image in the target video based on the face information set and the face image in the target video, for example, the first person identifier may be "actor a".
Then, the server may select candidate information corresponding to the first person id from the candidate information set as first candidate information according to a correspondence between the pre-established candidate information and the person id. For example, movie a corresponding to "actor a" may be selected as the first candidate information.
Then, the server may push the first candidate information. For example, the server may push movie-a to the terminal.
Finally, the terminal may display the content related to the first candidate information. As an example, the push content display box 302 of fig. 3 may display: "actor a", "movie a", and the text "click this open link" reminding the user to click to open movie a.
In the method provided by the embodiment of the application, face images in the target video are first tracked and detected; then, a first person identifier associated with the face images is determined based on a face information set and the face images in the target video, wherein the face information set comprises face information associated with person identifiers; next, candidate information corresponding to the first person identifier is selected from a candidate information set as first candidate information according to a pre-established correspondence between candidate information and person identifiers; finally, the first candidate information is pushed, thereby realizing targeted information pushing.
With further reference to fig. 4, a flow 400 of yet another embodiment of a method for pushing information is shown. The flow 400 of the method for pushing information comprises the following steps:
Step 401, tracking and detecting face images in a target video.
In the present embodiment, an execution subject of the method for pushing information (e.g., the server shown in fig. 1) may track and detect face images in the target video.
Step 402, in response to detecting that the face images in the target video indicate at least two people, assigning a person identifier to the face images indicating the same person.
In this embodiment, the execution subject of the method for pushing information (e.g., the server shown in fig. 1) may, in response to detecting that the face images in the target video indicate at least two people, assign a person identifier to each group of face images indicating the same person.
As an example, two series of face images may be detected in step 401, each series indicating one person: an A series of face images and a B series of face images. That is, the A series of face images all indicate the same person, and the B series of face images all indicate the same person, but the person indicated by the A series differs from the person indicated by the B series. The A series of face images may be assigned the person identifier "person A", and the B series of face images may be assigned the person identifier "person B". Note that the identities of person A and person B cannot be determined at this point.
Step 403, determining a predetermined number of target person identifiers according to the number of face images corresponding to each person identifier.
In this embodiment, an executing subject (for example, a server shown in fig. 1) of the method for pushing information may determine a predetermined number of target person identifications according to the number of face images corresponding to each person identification.
In this embodiment, the number of face images corresponding to each person identifier is the number of face images in the target video assigned to that person identifier.
As an example, the number of face images corresponding to the person identifier "person A" is 50, and the number of face images corresponding to the person identifier "person B" is 20.
Optionally, according to the number of face images corresponding to each person identifier, a predetermined number of person identifiers with the largest numbers of face images may be selected, or a predetermined number of person identifiers with the smallest numbers of face images may be selected.

As an example, if the predetermined number is one, then one target person identifier is determined.

As an example, the number of face images corresponding to the person identifier "person A" (50) is larger than the number corresponding to "person B" (20), so "person A" is determined as the target person identifier. It should be noted that when the target person identifier is determined from the larger face-image counts and candidate information is then pushed, the pushed candidate information better reflects candidate information related to a main person in the target video.

As an example, of the counts 50 for "person A" and 20 for "person B", 20 is the smaller, so the person identifier "person B" is determined as the target person identifier.

Optionally, one or more target person identifiers may be determined. It should be noted that when the target person identifier is determined from the smaller face-image counts and candidate information is then pushed, the pushed candidate information better reflects candidate information related to a person in the target video that is likely to be overlooked.
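Steps 402-403 amount to counting how many face images were assigned to each person identifier and keeping a predetermined number of identifiers with the largest (or smallest) counts. A minimal sketch, with all names assumed for illustration:

```python
from collections import Counter

def select_target_identifiers(assigned_ids, predetermined_number, prefer="larger"):
    """assigned_ids: the person identifier assigned to each detected face
    image in the target video (one entry per face image).
    prefer='larger' favors main persons in the video; 'smaller' favors
    persons that are likely to be overlooked (both options of step 403)."""
    counts = Counter(assigned_ids)
    ordered = counts.most_common()      # (identifier, count), count descending
    if prefer == "smaller":
        ordered = ordered[::-1]         # ascending instead
    return [pid for pid, _ in ordered[:predetermined_number]]

# 50 face images assigned to "person A", 20 to "person B", as in the example
ids = ["person A"] * 50 + ["person B"] * 20
assert select_target_identifiers(ids, 1) == ["person A"]
assert select_target_identifiers(ids, 1, prefer="smaller") == ["person B"]
```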
Step 404, for each target person identifier, determining a first person identifier associated with the face image assigned to the target person identifier according to the face information set and the face image assigned to the target person identifier.
In this embodiment, an executing subject (for example, a server shown in fig. 1) of the method for pushing information may determine, for each target person identifier, a first person identifier associated with a face image assigned to the target person identifier according to the face information set and the face image assigned to the target person identifier.
As an example, if the determined target person identifier is "person A", the first person identity associated with the face images assigned to "person A", for example "actor a", may be determined based on the face information set and those face images.
Step 405, according to the pre-established correspondence between the candidate information and the person identification, selecting the candidate information corresponding to the first person identification from the candidate information set as the first candidate information.
In this embodiment, an executing entity (for example, a server shown in fig. 1) of the method for pushing information may select candidate information corresponding to the first person id from a candidate information set as first candidate information according to a pre-established correspondence between the candidate information and the person id.
Step 406, pushing the first candidate information.
In this embodiment, an executing subject (e.g., a server shown in fig. 1) of the method for pushing information may push the first candidate information described above.
The specific operations of step 401, step 405, and step 406 in this embodiment are substantially the same as the operations of step 201, step 203, and step 204 in the embodiment shown in fig. 2, and are not described again here.
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the flow 400 of the method for pushing information in the present embodiment highlights the processing of face images of multiple persons involved in the target video. Therefore, the scheme described in this embodiment can screen the face images, determine the predetermined number of target person identifiers, and only then determine the first person identity, thereby reducing the amount of processing and increasing the pushing speed.
With further reference to fig. 5A, a flow 500 of yet another embodiment of a method for pushing information is shown. The process 500 of the method for pushing information includes the following steps:
Step 501, tracking and detecting a face image in a target video.

In the present embodiment, an execution subject (e.g., the server shown in fig. 1) of the method for pushing information may track and detect a face image in a target video.
Step 502, determining a first person identity associated with the face image in the target video based on the face information set and the face image in the target video.
In this embodiment, an executing subject (for example, a server shown in fig. 1) of the method for pushing information may determine, based on the face information set and the face image in the target video, the first person identifier associated with the face image in the target video. Here, the face information set includes face information associated with a person identification.
Step 503, according to the pre-established correspondence between the candidate information and the person identification, selecting the candidate information corresponding to the first person identification from the candidate information set as the first candidate information.
In this embodiment, an executing entity (for example, a server shown in fig. 1) of the method for pushing information may select candidate information corresponding to the first person id from a candidate information set as first candidate information according to a pre-established correspondence between the candidate information and the person id.
In this embodiment, the specific operations of step 501, step 502, and step 503 are substantially the same as the operations of step 201, step 202, and step 203 in the embodiment shown in fig. 2, and are not described again here.
Step 504, in response to detecting the pause playing operation for the target video, determining a target video frame displayed when the target video is paused.
In this embodiment, an executing subject (for example, a server shown in fig. 1) of the method for pushing information may determine a target video frame displayed when the target video is paused in response to detecting a pause play operation for the target video.
It should be noted that a pause operation performed by the user likely indicates that the user is interested in the paused target video frame. Therefore, analyzing the target video frame to push information allows more accurate candidate information to be pushed according to the user's interests.
And 505, determining a second person identity identifier associated with the face image in the target video frame based on the face information set.
In this embodiment, an executing subject (for example, the server shown in fig. 1) of the method for pushing information may determine, based on the face information set, a second person identifier associated with the face image in the target video frame.
In this embodiment, face image recognition may be performed on the target video frame to determine which face image series the target face image in the target video frame belongs to; the series so determined is taken as the target face image series.
Optionally, if a first person identity has already been determined for the target face image series, that first person identity may be taken as the person identity corresponding to the target face image and used as the second person identity.
Optionally, if the first person identity is not determined for the target face image series before, then feature extraction may be performed on one or more face images in the target face image series, and then matching may be performed with the face image features obtained from the face information set, so as to determine face information matching the target face image series. And determining the person identity information associated with the face information obtained by matching as a second person identity.
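The feature-extraction-and-matching path described above can be sketched as follows. The cosine-similarity comparison and the 0.8 threshold are illustrative assumptions, not values given by the application:

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def match_identity(track_features, face_info_set, threshold=0.8):
    """track_features: feature vectors extracted from one or more face
    images in the target face image series.
    face_info_set: person identity -> stored face feature vector.
    Returns the best-matching identity above the threshold, else None."""
    best_id, best_score = None, threshold
    for identity, stored in face_info_set.items():
        score = max(cosine(f, stored) for f in track_features)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

info_set = {"actor a": [1.0, 0.0], "actor b": [0.0, 1.0]}
assert match_identity([[0.9, 0.1]], info_set) == "actor a"
```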
Step 506, according to the corresponding relation, selecting candidate information corresponding to the second person identification as second candidate information from the candidate information set.
In this embodiment, an executing entity (for example, the server shown in fig. 1) of the method for pushing information may select candidate information corresponding to the second personal identifier from the candidate information set as second candidate information according to the correspondence relationship.
And 507, performing optical character recognition on the target video frame, and determining character information in the target video frame.
In this embodiment, an executing entity (for example, the server shown in fig. 1) of the method for pushing information may perform optical character recognition on the target video frame to determine the text information in the target video frame.
In this embodiment, Optical Character Recognition (OCR) converts text in an image into a machine-readable text format. Since the target video frame is an image, OCR processing may be performed on it to determine the text information in the target video frame.
In some optional implementations of this embodiment, step 507 may include: in response to detecting that the target video frame comprises at least one of a bullet screen and a subtitle, determining at least one of the following from the target video frame: bullet screen text information and subtitle text information.
It should be noted that the bullet screen text information can reflect the mood and experience of the user watching the video. The subtitle text information can embody the main content of the video.
And step 508, selecting candidate information from the candidate information set as third candidate information according to the text information.
In this embodiment, an executing entity (for example, the server shown in fig. 1) of the method for pushing information may select candidate information from the candidate information set as third candidate information according to the text information.
In some optional implementations of this embodiment, the text information may be matched one by one against the candidate information in the candidate information set. For example, if the candidate information is in text form, candidate information containing all of the text information may be selected as the third candidate information; alternatively, candidate information containing part of the text information may be selected as the third candidate information.
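A minimal sketch of this one-by-one matching, under the assumption that both the text information and the candidate information are plain strings:

```python
def select_third_candidates(text_info, candidate_set, partial=False):
    """text_info: the OCR text determined from the target video frame.
    candidate_set: text-form candidate information.
    partial=False keeps candidates containing ALL of the text information;
    partial=True keeps candidates containing any part of it."""
    if not partial:
        return [c for c in candidate_set if text_info in c]
    words = text_info.split()
    return [c for c in candidate_set if any(w in c for w in words)]

candidates = ["movie a starring actor a", "movie b trailer"]
assert select_third_candidates("actor a", candidates) == ["movie a starring actor a"]
assert select_third_candidates("actor a movie b", candidates, partial=True) == candidates
```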
In some optional implementations of this embodiment, please refer to fig. 5B, which shows an implementation flow of step 508; the flow may include:
step 5081, extracting keywords from the bullet screen text information and/or the subtitle text information.
In this embodiment, an execution subject (e.g., the server shown in fig. 1) of the method for pushing information may extract the keyword in the bullet text information and/or the subtitle text information.
For example, the bullet screen text information may be "actor B is nice", and the keyword extracted therefrom may be "actor B".
Step 5082, selecting candidate information corresponding to the extracted keyword from the candidate information set as third candidate information according to the pre-established correspondence between the keyword and the candidate information.
In this embodiment, an executing entity (for example, a server shown in fig. 1) of the method for pushing information may select candidate information corresponding to the extracted keyword from the candidate information set as third candidate information according to a pre-established correspondence relationship between the keyword and the candidate information.
For example, movie B corresponding to the keyword "actor B" may be selected as the third candidate information from the candidate information set.
It should be noted that when keywords are extracted from the subtitle text information, the selected candidate information can be close to the content of the target video. When keywords are extracted from the bullet screen text information, the selected candidate information can be close to the mood and experience of the user.
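Steps 5081-5082 can be sketched as follows. The substring-based keyword extraction is a simplifying assumption (a real system might use a trained keyword-extraction model), and all names are illustrative:

```python
def select_by_keywords(text_info, keyword_to_candidates, known_keywords):
    """text_info: bullet screen or subtitle text, e.g. "actor B is nice".
    known_keywords: vocabulary to extract against (step 5081).
    keyword_to_candidates: the pre-established keyword -> candidate
    correspondence (step 5082)."""
    # Step 5081: extract keywords appearing in the text.
    extracted = [k for k in known_keywords if k in text_info]
    # Step 5082: look up the candidate info for each extracted keyword.
    third = []
    for k in extracted:
        third.extend(keyword_to_candidates.get(k, []))
    return third

mapping = {"actor B": ["movie B"]}
assert select_by_keywords("actor B is nice", mapping, ["actor B"]) == ["movie B"]
```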
Step 509, pushing at least one of: the first candidate information, the second candidate information, and the third candidate information.
In this embodiment, an executing body (e.g., a server shown in fig. 1) of the method for pushing information may push at least one of: the first candidate information, the second candidate information, and the third candidate information.
As can be seen from fig. 5, compared with the embodiment corresponding to fig. 2, the flow 500 of the method for pushing information in this embodiment highlights the steps of extracting the face image and the text information from the target video frame at which the target video is paused. Therefore, the scheme described in this embodiment introduces more information related to the target video and can improve the comprehensiveness and accuracy of information push.
In some optional implementations of this embodiment, the method shown in this embodiment may further include: and establishing association relation information according to the first person identity associated with the face image in the target video. Here, the association relationship information is used to indicate an association relationship between the first person identifiers.
As an example, two first person identities, "actor a" and "actor B", are determined. Association relationship information a, indicating an association between "actor a" and "actor B", may be established.
In some optional implementations of this embodiment, the method shown in this embodiment may further include: and adding the incidence relation information to a pre-established incidence relation information set, wherein the incidence relation information in the incidence relation information set is used for indicating the incidence relation between at least two character identification marks.
As an example, the association relationship information a may be added to a set of association relationship information established in advance.
In some optional implementations of this embodiment, the method shown in this embodiment may further include: determining a fourth person identification having an association relation with at least one of the following items from the association relation information set: a first person identification, the second person identification, and the third person identification; according to the corresponding relation, selecting candidate information corresponding to the fourth person identification mark from the candidate information set as fourth candidate information; and pushing the fourth candidate information.
As an example, for the first person identifier "actor a", a fourth person identifier "actor C" having an association with "actor a" may be determined from the set of association information. Then, the candidate information movie C corresponding to "actor C" may be selected as the fourth candidate information. And finally, pushing the fourth candidate information.
It should be noted that, by determining the fourth person identity, candidate information can be determined again using the association relationships between persons. Thus, more comprehensive candidate information can be determined and pushed.
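The fourth-candidate selection described in this implementation can be sketched as a lookup in the association relationship information set; the dictionary shapes below are assumptions for illustration:

```python
def select_fourth_candidates(identities, association_set, candidate_map):
    """identities: the first/second/third person identities already found.
    association_set: person identity -> set of associated identities
    (the pre-established association relationship information set).
    candidate_map: person identity -> candidate information."""
    fourth = []
    for identity in identities:
        for related in association_set.get(identity, set()):
            # Skip identities already covered, then look up their candidates.
            if related not in identities and candidate_map.get(related):
                fourth.append(candidate_map[related])
    return fourth

# "actor a" is associated with "actor C", whose candidate info is movie C
assoc = {"actor a": {"actor C"}}
cands = {"actor C": "movie C"}
assert select_fourth_candidates(["actor a"], assoc, cands) == ["movie C"]
```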
With further reference to fig. 6, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of an apparatus for pushing information, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be applied to various electronic devices.
As shown in fig. 6, the apparatus 600 for pushing information of the present embodiment includes: the device comprises a detection unit 601, a first determination unit 602, a first selection unit 603 and a first pushing unit 604; the detection unit is configured to track and detect a face image in a target video; a first determining unit, configured to determine a first person identity associated with a face image in the target video based on a face information set and the face image in the target video, where the face information set includes face information associated with the person identity; the first selection unit is configured to select candidate information corresponding to the first person identity as first candidate information from a candidate information set according to a pre-established correspondence between the candidate information and the person identity; and the first pushing unit is configured to push the first candidate information.
In some optional implementations of this embodiment, the first determining unit is further configured to: in response to detecting that the face images in the target video indicate at least two people, assigning person identifications to the face images indicating the same people; determining a preset number of target person identifications according to the number of the face images corresponding to the person identifications, wherein the number of the face images corresponding to each person identification is as follows: the number of face images in the target video assigned to the character identifier; and for each target person identification, determining a first person identification associated with the face image distributed to the target person identification according to the face information set and the face image distributed to the target person identification.
In some optional implementations of this embodiment, the apparatus further includes: a second determination unit (not shown) configured to determine a target video frame displayed when the target video is paused, in response to detection of a pause play operation for the target video; a third determining unit (not shown) configured to determine a second person identifier associated with the face image in the target video frame based on the face information set.
In some optional implementations of this embodiment, the apparatus further includes: a second selecting unit (not shown) configured to select candidate information corresponding to the second person id from the candidate information set according to the corresponding relationship as second candidate information; and a second pushing unit (not shown) configured to push the second candidate information.
In some optional implementations of this embodiment, the apparatus further includes: a fourth determining unit (not shown) configured to perform optical character recognition on the target video frame, and determine text information in the target video frame; a third selecting unit (not shown) configured to select candidate information from the candidate information set as third candidate information according to the text information; and a third pushing unit (not shown) configured to push the third candidate information.
In some optional implementations of this embodiment, the fourth determining unit (not shown) is further configured to: in response to detecting that the target video frame comprises at least one of a bullet screen and a subtitle, determine at least one of the following from the target video frame: bullet screen text information and subtitle text information.
In some optional implementations of this embodiment, the third selecting unit (not shown) is further configured to: extracting key words in the bullet screen text information and/or the subtitle text information; and selecting candidate information corresponding to the extracted keyword from the candidate information set as third candidate information according to the corresponding relation between the pre-established keyword and the candidate information.
In some optional implementations of this embodiment, the apparatus further includes: and an establishing unit (not shown) configured to establish association relationship information according to the first person identification associated with the face image in the target video, where the association relationship information is used to indicate an association relationship between the first person identifications.
In some optional implementations of this embodiment, the apparatus further includes: and an adding unit (not shown) configured to add the association relationship information to a pre-established association relationship information set, where the association relationship information in the association relationship information set is used to indicate an association relationship between at least two personal identifiers.
In some optional implementations of this embodiment, the apparatus further includes: a fifth determining unit (not shown) configured to determine, from the association relation information set, a fourth person identification having an association relation with at least one of the following items: a first person identification, the second person identification, and the third person identification; a fourth selecting unit (not shown) configured to select, according to the correspondence, candidate information corresponding to the fourth person id from the candidate information set as fourth candidate information; and a fourth pushing unit (not shown) configured to push the fourth candidate information.
In this embodiment, specific processing of the detecting unit 601, the first determining unit 602, the first selecting unit 603, and the first pushing unit 604 of the apparatus 600 for pushing information and technical effects thereof may refer to related descriptions of step 201, step 202, step 203, and step 204 in the corresponding embodiment of fig. 2, which are not described herein again.
It should be noted that details of implementation and technical effects of each unit in the apparatus for pushing information provided in the embodiment of the present application may refer to descriptions of other embodiments in the present application, and are not described herein again.
Referring now to FIG. 7, shown is a block diagram of a computer system 700 suitable for use in implementing a server or terminal device of an embodiment of the present application. The server or the terminal device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU)701, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the system 700 are also stored. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An Input/Output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program, when executed by a Central Processing Unit (CPU)701, performs the above-described functions defined in the method of the present application. It should be noted that the computer readable medium mentioned above in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor comprises a detection unit, a first determination unit, a first selection unit and a first pushing unit. Here, the names of these units do not constitute a limitation to the unit itself in some cases, and for example, the detection unit may also be described as a "unit that tracks a face image in the detection target video".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: tracking and detecting a face image in a target video; determining a first person identity label associated with the face image in the target video based on a face information set and the face image in the target video, wherein the face information set comprises face information associated with the person identity label; selecting candidate information corresponding to the first person identity as first candidate information from a candidate information set according to a pre-established corresponding relation between the candidate information and the person identity; and pushing the first candidate information.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (19)

1. A method for pushing information, comprising:
tracking and detecting a face image in a target video;
determining a first person identity associated with a face image in the target video based on a face information set and the face image in the target video, wherein the face information set comprises face information associated with person identities, and the number of face images associated with the first person identity in the target video satisfies a preset condition, the preset condition comprising: the number is greater than a preset number, or the number is less than the preset number, wherein the preset number is the number of face images associated with any person identity other than the first person identity in the target video;
selecting candidate information corresponding to the first person identity as first candidate information from a candidate information set according to a pre-established corresponding relation between the candidate information and the person identity;
and pushing the first candidate information.
2. The method of claim 1, wherein the determining, based on the set of face information and the face image in the target video, a first person identity associated with the face image in the target video comprises:
in response to detecting that the face images in the target video indicate at least two persons, assigning person identifiers to the face images, wherein face images indicating a same person are assigned a same person identifier;
determining a preset number of target person identifiers according to the number of face images corresponding to each person identifier, wherein the number of face images corresponding to a person identifier is the number of face images in the target video assigned to that person identifier;
and for each target person identifier, determining, according to the face information set and the face images assigned to the target person identifier, a first person identity associated with the face images assigned to the target person identifier.
3. The method of claim 1, wherein the method further comprises:
in response to detecting a pause play operation for the target video, determining a target video frame displayed when the target video is paused;
and determining a second person identity mark associated with the face image in the target video frame based on the face information set.
4. The method of claim 3, wherein the method further comprises:
according to the corresponding relation, selecting candidate information corresponding to the second person identity identifier from the candidate information set as second candidate information;
and pushing the second candidate information.
5. The method of claim 3, wherein the method further comprises:
performing optical character recognition on the target video frame to determine text information in the target video frame;
selecting candidate information from the candidate information set as third candidate information according to the text information;
and pushing the third candidate information.
6. The method of claim 5, wherein the performing optical character recognition on the target video frame to determine text information in the target video frame comprises:
in response to detecting that the target video frame comprises at least one of a bullet screen and a subtitle, determining from the target video frame at least one of: bullet screen text information and subtitle text information.
7. The method of claim 6, wherein the selecting candidate information from the candidate information set as third candidate information according to the text information comprises:
extracting key words in the bullet screen text information and/or the subtitle text information;
and selecting candidate information corresponding to the extracted keyword from the candidate information set as third candidate information according to the corresponding relation between the pre-established keyword and the candidate information.
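As a hedged illustration of claims 6-7 (the OCR step itself is elided; assume the bullet-screen and subtitle text have already been recognized), keyword extraction and the keyword-to-candidate-information lookup might look like the following Python sketch. The vocabulary-scan "extraction" and all mappings are invented for the example; a real system would use proper text segmentation:

```python
def extract_keywords(text, vocabulary):
    """Toy keyword extraction: keep the vocabulary terms that occur in the text.

    A real system would use word segmentation or TF-IDF scoring;
    this substring scan is only a sketch.
    """
    return [kw for kw in vocabulary if kw in text]

def select_third_candidate(subtitle_text, bullet_text, keyword_to_candidate):
    # Merge subtitle and bullet-screen text, then match against the keywords
    # in the pre-established keyword -> candidate-information correspondence.
    combined = " ".join(t for t in (subtitle_text, bullet_text) if t)
    keywords = extract_keywords(combined, keyword_to_candidate.keys())
    return [keyword_to_candidate[kw] for kw in keywords]
```

So a frame whose subtitle mentions "sneakers" and whose bullet screen mentions "coffee" would select the candidate information pre-associated with each of those keywords.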
8. The method of claim 2, wherein the method further comprises:
and establishing association relation information according to the first person identities associated with the face images in the target video, wherein the association relation information is used for indicating an association relation between the first person identities.
9. The method of claim 8, wherein the method further comprises:
and adding the association relation information to a pre-established association relation information set, wherein association relation information in the association relation information set is used for indicating an association relation between at least two person identities.
10. The method of claim 9, wherein the method further comprises:
determining, from the association relation information set, a fourth person identity having an association relation with at least one of: the first person identity, the second person identity and a third person identity;
selecting, according to the correspondence, candidate information corresponding to the fourth person identity from the candidate information set as fourth candidate information;
pushing the fourth candidate information.
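A minimal sketch of the association-relation idea in claims 8-10: identities that appear in the same target video are recorded as associated pairs, and a "fourth" identity is any associated identity outside a given seed set. The data shapes (sets of frozenset pairs) are assumptions chosen for brevity, not the claimed representation:

```python
from itertools import combinations

def build_association_set(video_identities):
    """Record an association for every pair of identities co-occurring in a video."""
    return {frozenset(pair) for pair in combinations(sorted(video_identities), 2)}

def find_fourth_identities(association_set, seed_identities):
    """Return identities associated with any seed identity, excluding the seeds."""
    fourth = set()
    for pair in association_set:
        if pair & seed_identities:          # the pair touches a seed identity
            fourth |= pair - seed_identities  # keep the non-seed side
    return fourth
```

For instance, if identities A, B and C co-occur in a video, then seeding the lookup with A alone yields B and C as candidate "fourth" identities, whose candidate information could then be selected and pushed as the claims describe.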
11. An apparatus for pushing information, comprising:
the detection unit is configured to track and detect a face image in a target video;
a first determining unit, configured to determine a first person identity associated with a face image in the target video based on a face information set and the face image in the target video, wherein the face information set comprises face information associated with person identities, and the number of face images associated with the first person identity in the target video satisfies a preset condition, the preset condition comprising: the number is greater than a preset number, or the number is less than the preset number, wherein the preset number is the number of face images associated with any person identity other than the first person identity in the target video;
the first selection unit is configured to select candidate information corresponding to the first person identity as first candidate information from a candidate information set according to a pre-established correspondence between the candidate information and the person identity;
and the first pushing unit is configured to push the first candidate information.
12. The apparatus of claim 11, wherein the first determining unit is further configured to:
in response to detecting that the face images in the target video indicate at least two persons, assigning person identifiers to the face images, wherein face images indicating a same person are assigned a same person identifier;
determining a preset number of target person identifiers according to the number of face images corresponding to each person identifier, wherein the number of face images corresponding to a person identifier is the number of face images in the target video assigned to that person identifier;
and for each target person identifier, determining, according to the face information set and the face images assigned to the target person identifier, a first person identity associated with the face images assigned to the target person identifier.
13. The apparatus of claim 11, wherein the apparatus further comprises:
a second determination unit configured to determine a target video frame displayed when the target video is paused, in response to detection of a pause play operation for the target video;
and the third determining unit is configured to determine a second person identity identifier associated with the face image in the target video frame based on the face information set.
14. The apparatus of claim 13, wherein the apparatus further comprises:
the second selection unit is configured to select candidate information corresponding to the second person identity identifier from the candidate information set according to the corresponding relation to serve as second candidate information;
and the second pushing unit is configured to push the second candidate information.
15. The apparatus of claim 13, wherein the apparatus further comprises:
the fourth determining unit is configured to perform optical character recognition on the target video frame and determine text information in the target video frame;
the third selection unit is configured to select candidate information from the candidate information set as third candidate information according to the text information;
and the third pushing unit is configured to push the third candidate information.
16. The apparatus of claim 15, wherein the fourth determining unit is further configured to:
in response to detecting that the target video frame comprises at least one of a bullet screen and a subtitle, determining from the target video frame at least one of: bullet screen text information and subtitle text information.
17. The apparatus according to claim 16, wherein the third selecting unit is further configured to:
extracting key words in the bullet screen text information and/or the subtitle text information;
and selecting candidate information corresponding to the extracted keyword from the candidate information set as third candidate information according to the corresponding relation between the pre-established keyword and the candidate information.
18. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-10.
19. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-10.
CN201810295444.0A 2018-03-30 2018-03-30 Method and device for pushing information Active CN108509611B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810295444.0A CN108509611B (en) 2018-03-30 2018-03-30 Method and device for pushing information


Publications (2)

Publication Number Publication Date
CN108509611A (en) 2018-09-07
CN108509611B (en) 2021-11-12

Family

ID=63380232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810295444.0A Active CN108509611B (en) 2018-03-30 2018-03-30 Method and device for pushing information

Country Status (1)

Country Link
CN (1) CN108509611B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598188A (en) * 2018-10-16 2019-04-09 深圳壹账通智能科技有限公司 Information-pushing method, device, computer equipment and storage medium
CN111259682B (en) * 2018-11-30 2023-12-29 百度在线网络技术(北京)有限公司 Method and device for monitoring safety of construction site
CN109815818B (en) * 2018-12-25 2020-12-08 深圳市天彦通信股份有限公司 Target person tracking method, system and related device
CN109886258A (en) * 2019-02-19 2019-06-14 新华网(北京)科技有限公司 The method, apparatus and electronic equipment of the related information of multimedia messages are provided
CN112148833B (en) * 2019-06-27 2023-08-08 百度在线网络技术(北京)有限公司 Information pushing method, server, terminal and electronic equipment
CN111310731B (en) * 2019-11-15 2024-04-09 腾讯科技(深圳)有限公司 Video recommendation method, device, equipment and storage medium based on artificial intelligence
CN110996121A (en) * 2019-12-11 2020-04-10 北京市商汤科技开发有限公司 Information processing method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103744858A (en) * 2013-12-11 2014-04-23 深圳先进技术研究院 Method and system for pushing information
CN103870559A (en) * 2014-03-06 2014-06-18 海信集团有限公司 Method and equipment for obtaining information based on played video
CN105578222A (en) * 2016-02-01 2016-05-11 百度在线网络技术(北京)有限公司 Information push method and device
CN107291810A (en) * 2017-05-18 2017-10-24 深圳云天励飞技术有限公司 Data processing method, device and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170098122A1 (en) * 2010-06-07 2017-04-06 Affectiva, Inc. Analysis of image content with associated manipulation of expression presentation
CN104239338A (en) * 2013-06-19 2014-12-24 阿里巴巴集团控股有限公司 Information recommendation method and information recommendation device
CN104254019B (en) * 2013-06-28 2019-12-13 广州华多网络科技有限公司 information push result detection method and system
CN104618803B (en) * 2014-02-26 2018-05-08 腾讯科技(深圳)有限公司 Information-pushing method, device, terminal and server
US20160267543A1 (en) * 2015-03-12 2016-09-15 WeLink, Inc. Targeting and channeling digital advertisements to social media users through location-based social monitoring
CN106845738A (en) * 2015-12-03 2017-06-13 小米科技有限责任公司 Crowd's number method for pushing and device
CN107748879A (en) * 2017-11-16 2018-03-02 百度在线网络技术(北京)有限公司 For obtaining the method and device of face information



Similar Documents

Publication Publication Date Title
CN108509611B (en) Method and device for pushing information
CN111143610B (en) Content recommendation method and device, electronic equipment and storage medium
US10075742B2 (en) System for social media tag extraction
CN110378732B (en) Information display method, information association method, device, equipment and storage medium
KR102068790B1 (en) Estimating and displaying social interest in time-based media
US10333767B2 (en) Methods, systems, and media for media transmission and management
CN110582025A (en) Method and apparatus for processing video
CN108776676B (en) Information recommendation method and device, computer readable medium and electronic device
CN110740389B (en) Video positioning method, video positioning device, computer readable medium and electronic equipment
CN109255037B (en) Method and apparatus for outputting information
US11800201B2 (en) Method and apparatus for outputting information
CN109255035B (en) Method and device for constructing knowledge graph
CN111209431A (en) Video searching method, device, equipment and medium
CN110019948B (en) Method and apparatus for outputting information
CN109862100B (en) Method and device for pushing information
CN113315988B (en) Live video recommendation method and device
CN111897950A (en) Method and apparatus for generating information
WO2020078050A1 (en) Comment information processing method and apparatus, and server, terminal and readable medium
CN111046292A (en) Live broadcast recommendation method and device, computer-readable storage medium and electronic device
CN112559800A (en) Method, apparatus, electronic device, medium, and product for processing video
CN108038172B (en) Search method and device based on artificial intelligence
CN109241344B (en) Method and apparatus for processing information
CN112148962B (en) Method and device for pushing information
CN113343069A (en) User information processing method, device, medium and electronic equipment
CN111259194B (en) Method and apparatus for determining duplicate video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant