CN109640119B - Method and device for pushing information


Info

Publication number
CN109640119B
CN109640119B (application CN201910132764.9A)
Authority
CN
China
Prior art keywords
information
video information
user
video
expression
Prior art date
Legal status
Active
Application number
CN201910132764.9A
Other languages
Chinese (zh)
Other versions
CN109640119A (en)
Inventor
李彤辉
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910132764.9A
Publication of CN109640119A
Application granted
Publication of CN109640119B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/235 Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N 21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/251 Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N 21/258 Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N 21/25866 Management of end-user data
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N 21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N 21/44213 Monitoring of end-user related data
    • H04N 21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N 21/4508 Management of client data or end-user data
    • H04N 21/4532 Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • H04N 21/466 Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N 21/4668 Learning process for intelligent management for recommending content, e.g. movies
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition
    • G06V 40/178 Estimating age from face images; using age information for improving recognition

Abstract

The embodiment of the application discloses a method and a device for pushing information. One embodiment of the method comprises: in response to detecting that a page browsed by a user includes video information, acquiring a face image of the user; recognizing an expression of the user in the face image; selecting target video information from the video information based on the recognized expression; and pushing related information of the target video information. This embodiment can push an appropriate video to the user according to the user's expression, saving the user the time of selecting one.

Description

Method and device for pushing information
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and a device for pushing information.
Background
With the development of technology, people have increasingly high expectations of video playback devices (such as smart televisions and tablet computers). In particular, users expect these devices to screen out suitable videos for them automatically, saving the time spent selecting one.
Disclosure of Invention
The embodiment of the application provides a method and a device for pushing information.
In a first aspect, an embodiment of the present application provides a method for pushing information, including: in response to detecting that a page browsed by a user includes video information, acquiring a face image of the user; recognizing an expression of the user in the face image; selecting target video information from the video information based on the recognized expression; and pushing related information of the target video information.
In some embodiments, the video information includes tag information, and selecting the target video information from the video information based on the recognized expression includes: determining a type of the recognized expression; determining, in the tag information, tag information matching the type; and taking the video information to which the matched tag information belongs as the target video information.
In some embodiments, the method further comprises: identifying the age of the user in the face image; and selecting the target video information from the video information based on the recognized expression includes: selecting the target video information from the video information according to the age of the user and the recognized expression.
In some embodiments, the method further comprises: acquiring at least one face image of at least one viewer during video playback; performing expression recognition on the at least one face image; and generating viewing information of the video according to the expression recognition result.
In some embodiments, the method further comprises: pushing video replacement information in response to determining that the expression recognition result includes a preset number of expressions of preset types.
In some embodiments, the video information includes a video type, and pushing the related information of the target video information includes: in response to determining that the video type is a stereoscopic type, determining, according to the face image, whether the user wears glasses; and in response to determining that the user wears glasses, pushing prompt information for selecting a type of glasses.
In a second aspect, an embodiment of the present application provides an apparatus for pushing information, including: a first image acquisition unit configured to acquire a face image of a user in response to detecting that a page browsed by the user includes video information; a first expression recognition unit configured to recognize an expression of the user in the face image; a video information selecting unit configured to select target video information from the video information based on the recognized expression; and a related information pushing unit configured to push related information of the target video information.
In some embodiments, the video information includes tag information, and the video information selecting unit is further configured to: determine a type of the recognized expression; determine, in the tag information, tag information matching the type; and take the video information to which the matched tag information belongs as the target video information.
In some embodiments, the apparatus further comprises an age identifying unit configured to identify the age of the user in the face image, and the video information selecting unit is further configured to select the target video information from the video information according to the age of the user and the recognized expression.
In some embodiments, the apparatus further comprises: a second image acquisition unit configured to acquire at least one face image of at least one viewer during video playback; a second expression recognition unit configured to perform expression recognition on the at least one face image; and a viewing information generating unit configured to generate viewing information of the video according to the expression recognition result.
In some embodiments, the apparatus further comprises a replacement information pushing unit configured to push video replacement information in response to determining that the expression recognition result includes a preset number of expressions of preset types.
In some embodiments, the video information includes a video type, and the related information pushing unit is further configured to: in response to determining that the video type is a stereoscopic type, determine, according to the face image, whether the user wears glasses; and in response to determining that the user wears glasses, push prompt information for selecting a type of glasses.
In a third aspect, an embodiment of the present application provides a server, including: one or more processors; a storage device, on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any of the embodiments of the first aspect.
In a fourth aspect, the present application provides a computer-readable medium, on which a computer program is stored, which when executed by a processor implements the method as described in any one of the embodiments of the first aspect.
According to the method and apparatus for pushing information provided by the embodiments of the application, the face image of the user can be acquired after it is detected that the page browsed by the user includes video information. The expression of the user in the face image is then recognized, target video information is selected from the video information according to the recognized expression, and finally the related information of the target video information is pushed. An appropriate video can thus be pushed to the user according to the user's expression, saving the user the time of selecting one.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for pushing information, according to the present application;
FIG. 3 is a schematic diagram of an application scenario of a method for pushing information according to the present application;
FIG. 4 is a flow diagram of yet another embodiment of a method for pushing information according to the present application;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for pushing information according to the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing a server according to embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the present method for pushing information or apparatus for pushing information may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages and the like. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as a web browser, a shopping application, a search application, an instant messaging tool, an email client, and social platform software.
The terminal devices 101, 102, and 103 may be hardware or software. When they are hardware, they may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, ticket machines in movie theaters, and so on. When they are software, they may be installed in the electronic devices listed above, implemented either as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
The server 105 may be a server providing various services, such as a background server providing support for web pages displayed on the terminal devices 101, 102, 103. The background server may analyze and process the received data such as the web page request, and feed back the processing result (e.g., the related information of the video information) to the terminal devices 101, 102, and 103.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed cluster of multiple servers or as a single server. When the server 105 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
It should be noted that the method for pushing information provided by the embodiment of the present application is generally performed by the server 105, and accordingly, the apparatus for pushing information is generally disposed in the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for pushing information in accordance with the present application is shown. The method for pushing information of the embodiment comprises the following steps:
Step 201: in response to detecting that the page browsed by the user includes video information, acquiring a face image of the user.
In this embodiment, the execution subject of the method for pushing information (for example, the server 105 shown in fig. 1) may obtain an identifier (for example, a web address) of the page browsed by the user through a wired or wireless connection, to determine which page the user is browsing. It then detects whether the page includes video information of at least one video. The page may be displayed by a ticketing application installed on the terminal or by a ticketing machine, and the video information may include movie information. After determining that the page browsed by the user includes video information, the execution subject may acquire a face image of the user: it may send an image acquisition instruction to an image acquisition device communicatively connected to it, or send the instruction to the terminal, which then acquires the face image using its connected image acquisition device.
It should be noted that the wireless connection may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a Zigbee connection, a UWB (Ultra Wideband) connection, and other wireless connections now known or developed in the future.
Step 202: recognizing the expression of the user in the face image.
After the face image is acquired, the execution subject may perform expression recognition on it to obtain the expression of the user. Expression recognition is a well-established technique and is not described in detail here. The types of the user's expression may include happy, sad, painful, and the like.
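As an illustration only (not part of the disclosed method), the expression recognition here could be any classifier that maps a face image to one of the expression types above. The Python sketch below is a minimal stand-in; the ExpressionModel interface and the dummy scores are assumptions.

```python
from collections import namedtuple

# Hypothetical stand-in for a pretrained expression classifier: any model
# whose predict() returns one confidence score per label would fit here.
ExpressionModel = namedtuple("ExpressionModel", ["labels", "predict"])

def recognize_expression(face_image, model: ExpressionModel) -> str:
    """Step 202 sketch: return the highest-scoring expression label."""
    scores = model.predict(face_image)
    label, _ = max(zip(model.labels, scores), key=lambda pair: pair[1])
    return label

# Toy usage with a dummy model that always scores "sad" highest.
dummy = ExpressionModel(
    labels=["happy", "sad", "painful"],
    predict=lambda image: [0.1, 0.8, 0.1],
)
print(recognize_expression("face.jpg", dummy))  # -> sad
```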
Step 203: selecting target video information from the video information based on the recognized expression.
The execution subject may select the target video information from the above video information based on the expression of the user. For example, if the user's expression is sad, the execution subject may select comedic video information from the video information as the target video information.
In some optional implementations of this embodiment, the video information includes tag information used to describe the video; for example, a tag may be fun, comedy, suspense, science fiction, military, and the like. Step 203 may then be implemented by the following steps not shown in fig. 2: determining a type of the recognized expression; determining, in the tag information, tag information matching the type; and taking the video information to which the matched tag information belongs as the target video information.
In this implementation, the execution subject may first determine the type of the user's expression and then match that type against the tag information. Here, matching may mean that the type and a tag are similar words, or that the number of characters shared by the type and a tag is greater than a preset value. The execution subject takes the video information to which the matched tag information belongs as the target video information, as sketched below.
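A minimal sketch of this matching rule, assuming tags and expression types are plain strings; the similar-word table and the character-overlap threshold are illustrative assumptions, not values from the patent.

```python
# Tag matching as described above: a tag matches the expression type if
# the two are listed as similar words, or if they share more characters
# than a preset value. Both the table and the threshold are assumed.
SIMILAR_WORDS = {("sad", "comedy"), ("happy", "fun")}  # hypothetical pairs
CHAR_OVERLAP_THRESHOLD = 2

def tag_matches(expression_type: str, tag: str) -> bool:
    if (expression_type, tag) in SIMILAR_WORDS:
        return True
    return len(set(expression_type) & set(tag)) > CHAR_OVERLAP_THRESHOLD

def select_target_videos(expression_type: str, videos: list) -> list:
    """Keep the videos with at least one tag matching the expression type."""
    return [v for v in videos
            if any(tag_matches(expression_type, t) for t in v["tags"])]

videos = [{"name": "AAA", "tags": ["suspense"]},
          {"name": "BBB", "tags": ["comedy", "fun"]}]
print(select_target_videos("sad", videos))  # -> the "BBB" record only
```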
In some alternative implementations of this embodiment, the video information includes an image. The execution subject may also determine the target video information by the following steps not shown in fig. 2: determining whether the image included in the video information contains a face; in response to determining that it does, performing expression recognition on the face and determining the type of its expression; and taking the video information whose image expression type is the same as the type of the user's expression as the target video information.
In this implementation, the image may be a promotional poster of a movie. Typically, a movie's promotional poster includes images of the characters played by the actors. The execution subject may perform expression recognition on the faces in the image, and when the expression of an actor in the poster of one or more pieces of video information is the same as the user's expression, take that video information as the target video information.
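A sketch of this poster-based variant; the face detector and expression classifier are passed in as plain functions because the patent does not specify them, and the toy implementations below are purely for demonstration.

```python
def select_by_poster_expression(user_expression, videos, detect_faces, classify):
    """Keep videos whose poster shows a face with the user's expression."""
    targets = []
    for video in videos:
        for face in detect_faces(video["poster"]):
            if classify(face) == user_expression:
                targets.append(video)
                break  # one matching face is enough for this video
    return targets

# Toy detector/classifier: each poster "contains" one face whose
# expression is encoded in the file name, purely for demonstration.
videos = [{"name": "AAA", "poster": "aaa_happy.jpg"},
          {"name": "BBB", "poster": "bbb_sad.jpg"}]
detect = lambda poster: [poster]
classify = lambda face: face.rsplit("_", 1)[1].split(".")[0]
print(select_by_poster_expression("sad", videos, detect, classify))
```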
In some optional implementations of this embodiment, the method may further include the following step not shown in fig. 2: identifying the age of the user in the face image.
In this implementation, the execution subject may identify the age of the user in the face image in various ways, for example, by a convolutional neural network.
Accordingly, step 203 can be implemented by the following step not shown in fig. 2: selecting the target video information from the video information according to the age of the user and the recognized expression.
When determining the target video information, the execution subject may consider the age and the expression of the user together. For example, when the execution subject recognizes that the user is a child, it may first take the cartoon-type video information as candidates, and then select the target video information from the candidates according to the child's expression.
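A sketch of combining age with expression, following the child/cartoon example above; the age cutoff and the field names are illustrative assumptions.

```python
CHILD_AGE_LIMIT = 12  # assumed cutoff; the patent does not specify one

def select_by_age_and_expression(age, expression_type, videos, by_expression):
    """First narrow candidates by an age rule, then select by expression."""
    if age <= CHILD_AGE_LIMIT:
        candidates = [v for v in videos if v.get("type") == "cartoon"]
    else:
        candidates = videos
    return by_expression(expression_type, candidates)

videos = [{"name": "CCC", "type": "cartoon"},
          {"name": "DDD", "type": "thriller"}]
pick_all = lambda expr, vids: vids  # stand-in for the tag matcher above
print(select_by_age_and_expression(8, "sad", videos, pick_all))  # -> CCC only
```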
Step 204: pushing related information of the target video information.
After determining the target video information, the execution subject may push the related information of the target video to the terminal used by the user. Here, the related information refers to information associated with the video information; for example, it may share the target video's name or feature the same actors. Taking movie information as an example, the related information may be a featurette, movie stills, a theme song video, other works by the same director, or at least one other movie featuring at least one actor of the movie.
In some optional implementations of the embodiment, the video information includes a video name, actor names, and a director name. The execution subject may determine the related information of the target video information by the following step not shown in fig. 2: determining, in a preset related information set, the information sharing at least one of the video name, an actor name, and the director name with the target video information as the related information of the target video information.
In this implementation manner, videos related to the target video information can be pushed to guide the user to select related movies.
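A sketch of this lookup over a preset related-information set; the record layout (video_name, actors, director) is an assumed representation, not one the patent prescribes.

```python
def related_information(target: dict, related_set: list) -> list:
    """Keep entries sharing a video name, an actor name, or a director name."""
    def shares_field(item):
        return (item["video_name"] == target["video_name"]
                or bool(set(item["actors"]) & set(target["actors"]))
                or item["director"] == target["director"])
    return [item for item in related_set if shares_field(item)]

target = {"video_name": "BBB", "actors": ["Actor X"], "director": "Director Y"}
related_set = [
    {"video_name": "BBB", "actors": [], "director": ""},           # same movie
    {"video_name": "EEE", "actors": ["Actor X"], "director": ""},  # shared actor
    {"video_name": "FFF", "actors": [], "director": "Director Z"},
]
print(related_information(target, related_set))  # keeps the first two entries
```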
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for pushing information according to the present embodiment. In the application scenario of fig. 3, the user browses the ticket purchase page through a movie ticketing application installed on a mobile phone. The page shows information for several movies currently showing, including movie information AAA and movie information BBB. The front camera of the phone captures the user's face image and sends it to the server. The server recognizes that the type of the user's expression is sad and therefore selects movie information BBB from the movie information as the target video information. It then pushes the behind-the-scenes footage and stills of movie BBB to the phone.
In some optional implementations of this embodiment, the video information includes a video type indicating whether the video is a stereoscopic (3D, three-dimensional) video. Step 204 may then be implemented by the following steps not shown in fig. 2: in response to determining that the video type is a stereoscopic type, determining, according to the face image, whether the user wears glasses; and in response to determining that the user wears glasses, pushing prompt information for selecting a type of glasses.
This implementation can be applied to movie ticketing scenarios. If the execution subject determines that the video type is the 3D type, it may determine whether the face in the face image wears glasses: it may first locate the feature points of the eyes in the face image and then determine whether there are glasses feature points around them. If so, the user is determined to be wearing glasses, and prompt information for selecting a type of glasses is pushed. The prompt information may prompt the user to choose clip-on 3D glasses, or prompt the movie theater to prepare clip-on 3D glasses for the user.
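A geometric sketch of the glasses check just described; the feature points are assumed to be (x, y) pixel pairs and the "around the eyes" radius is an illustrative value, as is the prompt text.

```python
import math

NEAR_RADIUS = 30  # pixels; an assumed notion of "around the eye feature points"

def wears_glasses(eye_points, glasses_points):
    """True if any glasses feature point lies near an eye feature point."""
    return any(math.hypot(gx - ex, gy - ey) <= NEAR_RADIUS
               for ex, ey in eye_points
               for gx, gy in glasses_points)

def push_for_stereoscopic(video_type, eye_points, glasses_points):
    """Push the glasses prompt only for 3D videos and bespectacled users."""
    if video_type == "3D" and wears_glasses(eye_points, glasses_points):
        return "Clip-on 3D glasses are recommended."  # the prompt information
    return None

print(push_for_stereoscopic("3D", [(120, 80)], [(125, 78)]))  # prompt pushed
```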
According to the method for pushing information provided by this embodiment of the application, the face image of the user can be acquired after it is detected that the page browsed by the user includes video information. The expression of the user in the face image is then recognized, target video information is selected from the video information according to the recognized expression, and finally the related information of the target video information is pushed. An appropriate video can thus be pushed to the user according to the user's expression, saving the user the time of selecting one.
With continued reference to fig. 4, a flow 400 of another embodiment of a method for pushing information in accordance with the present application is shown. As shown in fig. 4, the method of this embodiment may further include the following steps:
Step 401: acquiring at least one face image of at least one viewer during video playback.
In this embodiment, during video playback the execution subject may also control an image acquisition device communicatively connected to it to capture at least one face image of at least one viewer; the device may capture the viewers' face images multiple times during playback. It will be appreciated that the device playing the video need not be the device displaying the page in the embodiment shown in fig. 2, and the viewer here may or may not be the same person as the user in that embodiment.
Step 402: performing expression recognition on the at least one face image.
After acquiring the at least one face image, the execution subject may perform expression recognition on it, obtaining the expression of each face in the at least one face image and thus an expression recognition result. The result may include the types of the expressions, such as laughing, smiling, tearing up, frowning, and the like.
Step 403: generating viewing information of the video according to the expression recognition result.
After obtaining the expression recognition result, the execution subject may generate the viewing information of the video. The viewing information may include statistics of the expressions, for example, laughing 3 times and smiling 2 times.
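A minimal sketch of step 403: tallying the recognized expression labels into the kind of statistics the description mentions ("laughing 3 times, smiling 2 times"). Only the label strings and the output format are assumptions.

```python
from collections import Counter

def viewing_information(expression_results: list) -> str:
    """Turn per-image expression labels into a viewing-information summary."""
    counts = Counter(expression_results)
    return ", ".join(f"{label} x{n}" for label, n in counts.most_common())

labels = ["laughing", "laughing", "smiling", "laughing", "smiling"]
print(viewing_information(labels))  # -> laughing x3, smiling x2
```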
Step 404: in response to determining that the expression recognition result includes a preset number of expressions of preset types, pushing video replacement information.
If the execution subject determines that the expression recognition result includes a preset number of expressions of preset types, it may push video replacement information. For example, it may determine whether the number of frowns in the expression recognition result exceeds 10 and, if so, push the video replacement information, which may prompt the user to decide whether the video being played should be replaced.
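A sketch of step 404 using the frown threshold from the example above; the prompt text is an assumption.

```python
FROWN_LIMIT = 10  # the example threshold given in the description

def maybe_push_replacement(expression_results: list):
    """Push video replacement information once frowns exceed the limit."""
    if expression_results.count("frowning") > FROWN_LIMIT:
        return "Would you like to replace the video being played?"
    return None

print(maybe_push_replacement(["frowning"] * 11))  # prompt is pushed
```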
The method for pushing information provided by the above embodiment of the application can capture face images of the user during video playback, generate viewing information from the user's expressions, and push video replacement information. The user therefore does not need to enter viewing information manually, and when the user dislikes the currently playing video, the user can be prompted to replace it, which improves the user experience.
With further reference to fig. 5, as an implementation of the method shown in the above-mentioned figures, the present application provides an embodiment of an apparatus for pushing information, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be applied to various electronic devices.
As shown in fig. 5, the apparatus 500 for pushing information of the present embodiment includes: a first image acquisition unit 501, a first expression recognition unit 502, a video information selecting unit 503, and a related information pushing unit 504.
The first image acquisition unit 501 is configured to acquire a face image of a user in response to detecting that video information is included in a page browsed by the user.
A first expression recognition unit 502 configured to recognize an expression of the user in the face image.
A video information selecting unit 503 configured to select target video information from the video information based on the identified expression.
A related information pushing unit 504 configured to push related information of the target video information.
In some optional implementations of this embodiment, the video information includes tag information. The video information selecting unit 503 may be further configured to: determine a type of the recognized expression; determine, in the tag information, tag information matching the type; and take the video information to which the matched tag information belongs as the target video information.
In some optional implementations of this embodiment, the apparatus 500 may further include an age identifying unit, not shown in fig. 5, configured to identify the age of the user in the face image. Accordingly, the video information selecting unit 503 is further configured to select the target video information from the video information according to the age of the user and the recognized expression.
In some optional implementations of this embodiment, the apparatus 500 may further include a second image acquisition unit, a second expression recognition unit, and a viewing information generating unit, which are not shown in fig. 5.
The second image acquisition unit is configured to acquire at least one face image of at least one viewer during video playback.
The second expression recognition unit is configured to perform expression recognition on the at least one face image.
The viewing information generating unit is configured to generate the viewing information of the video according to the expression recognition result.
In some optional implementations of the present embodiment, the apparatus 500 may further include a replacement information pushing unit, not shown in fig. 5, configured to push the video replacement information in response to determining that a preset number of expressions of preset types are included in the expression recognition result.
In some optional implementations of this embodiment, the video information includes a video type. The related information pushing unit 504 may be further configured to: in response to determining that the video type is a stereoscopic type, determining whether the user wears glasses according to the face image; in response to determining that the user is wearing glasses, pushing prompt information for selecting a type of glasses.
The device for pushing information provided by the above embodiment of the application can acquire the face image of the user after detecting that the page browsed by the user includes video information. Then, the expression of the user in the face image is recognized. And then, selecting target video information from the video information according to the identified expression. And finally, pushing the related information of the target video information. Therefore, the appropriate video can be pushed to the user according to the expression of the user, and the time for the user to select the appropriate video is saved.
It should be understood that units 501 to 504, which are described in the apparatus 500 for pushing information, correspond to respective steps in the method described with reference to fig. 2, respectively. Thus, the operations and features described above for the method for pushing information are also applicable to the apparatus 500 and the units included therein, and are not described in detail here.
Referring now to FIG. 6, shown is a block diagram of a computer system 600 suitable for use in implementing a server according to embodiments of the present application. The server shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read therefrom is installed into the storage section 608 as needed.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a machine-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601.
It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, which may be described as: a processor comprising a first image acquisition unit, a first expression recognition unit, a video information selecting unit, and a related information pushing unit. The names of these units do not, in some cases, limit the units themselves; for example, the related information pushing unit may also be described as "a unit that pushes related information of the target video information".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire a face image of a user in response to detecting that a page browsed by the user includes video information; recognize the expression of the user in the face image; select target video information from the video information based on the recognized expression; and push related information of the target video information.
The above description is only a preferred embodiment of the application and an illustration of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention disclosed herein is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention, for example, arrangements in which the above features are interchanged with (but not limited to) features with similar functions disclosed in the present application.

Claims (12)

1. A method for pushing information, comprising:
in response to detecting that a page browsed by a user comprises video information, acquiring a face image of the user, wherein the video information comprises a video type;
recognizing the expression of the user in the face image;
selecting target video information from the video information based on the identified expression;
pushing related information of the target video information;
wherein the pushing of the related information of the target video information comprises:
in response to determining that the video type is a stereoscopic type, determining whether the user wears glasses according to the face image;
in response to determining that the user is wearing glasses, pushing prompt information for selecting a type of glasses.
2. The method of claim 1, wherein the video information comprises tag information; and
the selecting target video information from the video information based on the identified expression comprises:
determining a type of the identified expression;
determining, in the tag information, tag information matched with the type;
and taking the video information to which the matched tag information belongs as the target video information.
3. The method of claim 1, wherein the method further comprises:
identifying the age of the user in the face image; and
the selecting target video information from the video information based on the identified expression comprises:
and selecting target video information from the video information according to the age of the user and the identified expression.
4. The method of claim 1, wherein the method further comprises:
acquiring at least one face image of at least one viewer during video playback;
performing expression recognition on the at least one facial image;
and generating the viewing information of the video according to the expression recognition result.
5. The method of claim 4, wherein the method further comprises:
pushing video replacement information in response to determining that the expression recognition result comprises a preset number of expressions of preset types.
6. An apparatus for pushing information, comprising:
a first image acquisition unit configured to acquire a face image of a user in response to detecting that a page browsed by the user includes video information, wherein the video information comprises a video type;
a first expression recognition unit configured to recognize an expression of a user in the face image;
a video information selecting unit configured to select target video information from the video information based on the identified expression;
a related information pushing unit configured to push related information of the target video information;
the related information pushing unit is further configured to:
in response to determining that the video type is a stereoscopic type, determining whether the user wears glasses according to the face image;
in response to determining that the user is wearing glasses, pushing prompt information for selecting a type of glasses.
7. The apparatus of claim 6, wherein the video information comprises tag information; and
the video information selection unit is further configured to:
determining a type of the identified expression;
determining, in the tag information, tag information matched with the type;
and taking the video information to which the matched tag information belongs as the target video information.
8. The apparatus of claim 6, wherein the apparatus further comprises:
an age identifying unit configured to identify an age of a user in the face image; and
the video information selection unit is further configured to:
and selecting target video information from the video information according to the age of the user and the identified expression.
9. The apparatus of claim 6, wherein the apparatus further comprises:
the second image acquisition unit is configured to acquire at least one face image of at least one viewer in the video playing process;
a second expression recognition unit configured to perform expression recognition on the at least one face image;
and the viewing information generating unit is configured to generate the viewing information of the video according to the expression recognition result.
10. The apparatus of claim 9, wherein the apparatus further comprises:
a replacement information pushing unit configured to push video replacement information in response to determining that a preset number of expressions of preset types are included in the expression recognition result.
11. A server, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
12. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-5.
CN201910132764.9A 2019-02-21 2019-02-21 Method and device for pushing information Active CN109640119B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910132764.9A CN109640119B (en) 2019-02-21 2019-02-21 Method and device for pushing information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910132764.9A CN109640119B (en) 2019-02-21 2019-02-21 Method and device for pushing information

Publications (2)

Publication Number Publication Date
CN109640119A CN109640119A (en) 2019-04-16
CN109640119B (en) 2021-06-11

Family

ID=66065823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910132764.9A Active CN109640119B (en) 2019-02-21 2019-02-21 Method and device for pushing information

Country Status (1)

Country Link
CN (1) CN109640119B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110225196B (en) * 2019-05-30 2021-01-26 维沃移动通信有限公司 Terminal control method and terminal equipment
CN111079555A (en) * 2019-11-25 2020-04-28 Oppo广东移动通信有限公司 User preference degree determining method and device, electronic equipment and readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103456008A (en) * 2013-08-26 2013-12-18 刘晓英 Method for matching face and glasses
CN103563367A (en) * 2011-04-04 2014-02-05 日立麦克赛尔株式会社 Video display system, display device, and display method
CN104581391A (en) * 2015-01-19 2015-04-29 无锡桑尼安科技有限公司 Television broadcasting control equipment based on picture content detection
CN106384083A (en) * 2016-08-31 2017-02-08 上海交通大学 Automatic face expression identification and information recommendation method
CN106407418A (en) * 2016-09-23 2017-02-15 Tcl集团股份有限公司 A face identification-based personalized video recommendation method and recommendation system
CN108446390A (en) * 2018-03-22 2018-08-24 百度在线网络技术(北京)有限公司 Method and apparatus for pushed information
CN108459893A (en) * 2018-01-23 2018-08-28 维沃移动通信有限公司 A kind of based reminding method, terminal and computer readable storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103391457A (en) * 2012-05-07 2013-11-13 山东沃飞电子科技有限公司 Playing method, terminal equipment and network system of recommendation shows
KR20160029269A (en) * 2014-09-05 2016-03-15 엘지전자 주식회사 Multimedia device and server for providing information of recommend content to the same
CN104917896A (en) * 2015-06-12 2015-09-16 努比亚技术有限公司 Data pushing method and terminal equipment
CN105045837A (en) * 2015-06-30 2015-11-11 百度在线网络技术(北京)有限公司 Information searching method and information searching device
CN105956059A (en) * 2016-04-27 2016-09-21 乐视控股(北京)有限公司 Emotion recognition-based information recommendation method and apparatus
CN107633098A (en) * 2017-10-18 2018-01-26 维沃移动通信有限公司 A kind of content recommendation method and mobile terminal
CN108491496A (en) * 2018-03-19 2018-09-04 重庆首卓网络信息科技有限公司 A kind of processing method and processing device of promotion message
CN108536803A (en) * 2018-03-30 2018-09-14 百度在线网络技术(北京)有限公司 Song recommendations method, apparatus, equipment and computer-readable medium
CN108509660A (en) * 2018-05-29 2018-09-07 维沃移动通信有限公司 A kind of broadcasting object recommendation method and terminal device
CN109189953A (en) * 2018-08-27 2019-01-11 维沃移动通信有限公司 A kind of selection method and device of multimedia file

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103563367A (en) * 2011-04-04 2014-02-05 日立麦克赛尔株式会社 Video display system, display device, and display method
CN103456008A (en) * 2013-08-26 2013-12-18 刘晓英 Method for matching face and glasses
CN104581391A (en) * 2015-01-19 2015-04-29 无锡桑尼安科技有限公司 Television broadcasting control equipment based on picture content detection
CN106384083A (en) * 2016-08-31 2017-02-08 上海交通大学 Automatic face expression identification and information recommendation method
CN106407418A (en) * 2016-09-23 2017-02-15 Tcl集团股份有限公司 A face identification-based personalized video recommendation method and recommendation system
CN108459893A (en) * 2018-01-23 2018-08-28 维沃移动通信有限公司 A kind of based reminding method, terminal and computer readable storage medium
CN108446390A (en) * 2018-03-22 2018-08-24 百度在线网络技术(北京)有限公司 Method and apparatus for pushed information

Also Published As

Publication number Publication date
CN109640119A (en) 2019-04-16

Similar Documents

Publication Publication Date Title
US11023716B2 (en) Method and device for generating stickers
CN111711828B (en) Information processing method and device and electronic equipment
CN111447489A (en) Video processing method and device, readable medium and electronic equipment
CN108509611B (en) Method and device for pushing information
CN113992934B (en) Multimedia information processing method, device, electronic equipment and storage medium
CN113792181A (en) Video recommendation method, device, equipment and medium
CN110858134A (en) Data, display processing method and device, electronic equipment and storage medium
CN112153422B (en) Video fusion method and device
CN111897950A (en) Method and apparatus for generating information
CN109640119B (en) Method and device for pushing information
CN113589991A (en) Text input method and device, electronic equipment and storage medium
CN111935155A (en) Method, apparatus, server and medium for generating target video
CN109241344B (en) Method and apparatus for processing information
CN112784103A (en) Information pushing method and device
US11750876B2 (en) Method and apparatus for determining object adding mode, electronic device and medium
CN111310086A (en) Page jump method and device and electronic equipment
CN112307393A (en) Information issuing method and device and electronic equipment
CN115209215A (en) Video processing method, device and equipment
CN111125501B (en) Method and device for processing information
CN115086734A (en) Information display method, device, equipment and medium based on video
CN111897951A (en) Method and apparatus for generating information
CN112306976A (en) Information processing method and device and electronic equipment
CN110188712B (en) Method and apparatus for processing image
CN112395530A (en) Data content output method and device, electronic equipment and computer readable medium
CN110909206B (en) Method and device for outputting information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant