CN108446385A - Method and apparatus for generating information - Google Patents
Method and apparatus for generating information
- Publication number: CN108446385A
- Application number: CN201810235162.1A
- Authority
- CN
- China
- Prior art keywords
- identification information
- target
- personal identification
- video
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
- G06F16/735—Filtering based on additional data, e.g. user or group profiles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7837—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
- G06F16/784—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content the detected or recognised objects being people
Abstract
Embodiments of the present application disclose a method and apparatus for generating information. One specific implementation of the method includes: obtaining information on a followed person whom a target user follows; obtaining a person-identification-information set corresponding to a target video, where person identification information is identification information of a person characterized by a facial image obtained in advance by performing face recognition on video frames of the target video; determining whether target person identification information exists in the person-identification-information set, where the person characterized by the target person identification information is related to the followed person; and, in response to determining that it exists, generating recommendation information related to the target video. This embodiment improves the targeting of the generated recommendation information.
Description
Technical field
Embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for generating information.
Background
With the development of Internet technology, the Internet has become an indispensable part of daily life. Through the Internet, people can easily browse videos on video websites. Meanwhile, with the popularity of terminal devices (such as mobile phones and televisions), people can conveniently browse online videos through these devices.
Video websites often recommend videos to users to attract them to watch videos on the site. When a user browses videos on a video website, the website records the user's behavioral data, i.e., the videos the user is currently browsing or has browsed, and then recommends, on the pages of the video website, videos similar to those the user is browsing or has browsed. Such "similar videos" are generally determined by manually adding labels or categories to videos. For example, if the lead actor of a certain film is "Wang XX", the label "Wang XX" is added to that film.
Summary
Embodiments of the present application propose a method and apparatus for generating information.
In a first aspect, an embodiment of the present application provides a method for generating information, the method including: obtaining information on a followed person whom a target user follows; obtaining a person-identification-information set corresponding to a target video, where person identification information is identification information of a person characterized by a facial image obtained in advance by performing face recognition on video frames of the target video; determining whether target person identification information exists in the person-identification-information set, where the person characterized by the target person identification information is related to the followed person; and, in response to determining that it exists, generating recommendation information related to the target video.
In some embodiments, the person-identification-information set is established in advance as follows: obtaining video frames of the target video; recognizing the facial images contained in the obtained video frames to obtain feature data of the recognized facial images; matching the obtained feature data against face feature data in a preset face-feature-data set to obtain at least one piece of target face feature data; and determining the person-identification-information set based on the person identification information corresponding to each piece of the at least one piece of target face feature data.
In some embodiments, determining the person-identification-information set based on the person identification information corresponding to the at least one piece of target face feature data includes: for the person identification information corresponding to each piece of target face feature data, determining, based on the video frames of the target video in which the facial image characterized by the person identification information appears and on the frame rate of the target video, the duration for which that facial image appears in the target video; in response to determining that the duration exceeds a preset time threshold, extracting that person identification information; and determining the extracted pieces of person identification information as the person-identification-information set.
In some embodiments, recognizing the facial images contained in the obtained video frames to obtain feature data of the recognized facial images includes: inputting the obtained video frames into a convolutional neural network trained in advance to obtain the feature data of the facial images contained in the video frames, where the convolutional neural network is used to characterize the correspondence between a video frame and the feature data of the facial images it contains.
In some embodiments, obtaining the information on the followed person whom the target user follows includes: obtaining user information of the target user; and, based on the user information, obtaining the information on the followed person whom the target user follows.
In a second aspect, an embodiment of the present application provides an apparatus for generating information, the apparatus including: a first obtaining unit configured to obtain information on a followed person whom a target user follows; a second obtaining unit configured to obtain a person-identification-information set corresponding to a target video, where person identification information is identification information of a person characterized by a facial image obtained in advance by performing face recognition on video frames of the target video; a determining unit configured to determine whether target person identification information exists in the person-identification-information set, where the person characterized by the target person identification information is related to the followed person; and a generating unit configured to generate, in response to determining that it exists, recommendation information related to the target video.
In some embodiments, the person-identification-information set is established in advance as follows: obtaining video frames of the target video; recognizing the facial images contained in the obtained video frames to obtain feature data of the recognized facial images; matching the obtained feature data against face feature data in a preset face-feature-data set to obtain at least one piece of target face feature data; and determining the person-identification-information set based on the person identification information corresponding to each piece of the at least one piece of target face feature data.
In some embodiments, determining the person-identification-information set based on the person identification information corresponding to the at least one piece of target face feature data includes: for the person identification information corresponding to each piece of target face feature data, determining, based on the video frames of the target video in which the facial image characterized by the person identification information appears and on the frame rate of the target video, the duration for which that facial image appears in the target video; in response to determining that the duration exceeds a preset time threshold, extracting that person identification information; and determining the extracted pieces of person identification information as the person-identification-information set.
In some embodiments, recognizing the facial images contained in the obtained video frames to obtain feature data of the recognized facial images includes: inputting the obtained video frames into a convolutional neural network trained in advance to obtain the feature data of the facial images contained in the video frames, where the convolutional neural network is used to characterize the correspondence between a video frame and the feature data of the facial images it contains.
In some embodiments, the first obtaining unit includes: a first obtaining module configured to obtain user information of the target user; and a second obtaining module configured to obtain, based on the user information, the information on the followed person whom the target user follows.
In a third aspect, an embodiment of the present application provides a server, the server including: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium storing a computer program which, when executed by a processor, implements the method described in any implementation of the first aspect.
In the method and apparatus for generating information provided by the embodiments of the present application, information on the followed person whom a target user follows is obtained first; that information is then matched against an obtained person-identification-information set to obtain target person identification information; and recommendation information related to the target video is finally generated. Here, person identification information is identification information of a person characterized by a facial image obtained in advance by performing face recognition on the video frames of the target video. This improves the targeting of the generated recommendation information related to the target video.
Description of the drawings
Other features, objects and advantages of the present application will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for generating information according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for generating information according to the present application;
Fig. 4 is a flowchart of another embodiment of the method for generating information according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the apparatus for generating information according to the present application;
Fig. 6 is a structural schematic diagram of a computer system suitable for implementing the server of the embodiments of the present application.
Detailed description of embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the related invention, not to limit it. It should also be noted that, for convenience of description, only the parts related to the invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which the method for generating information or the apparatus for generating information of the embodiments of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired or wireless communication links or fiber-optic cables.
A user may use the terminal devices 101, 102 and 103 to interact with the server 105 through the network 104 to receive or send messages and the like. Various communication client applications, such as video playing applications and web browser applications, may be installed on the terminal devices 101, 102 and 103.
The terminal devices 101, 102 and 103 may be hardware or software. When they are hardware, they may be various electronic devices that have a display screen and support video playing, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, desktop computers and the like. When the terminal devices 101, 102 and 103 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server providing various services, for example a background video recommendation server that supports the videos played on the terminal devices 101, 102 and 103. The background video recommendation server may process data such as the obtained information on the person followed by the user and the person identification information, and feed the processing result (for example, information related to the recommended video) back to the terminal device.
It should be noted that the method for generating information provided by the embodiments of the present application is generally performed by the server 105; accordingly, the apparatus for generating information is generally arranged in the server 105.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks and servers according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for generating information according to the present application is shown. The method for generating information includes the following steps:
Step 201: obtain information on the followed person whom the target user follows.
In this embodiment, the executing body of the method for generating information (for example, the server shown in Fig. 1) may obtain, remotely or locally through a wired or wireless connection, information on the followed person whom the target user follows. The target user may be a user in a preset user list or user set (for example, a member of a certain video website), or a user meeting certain conditions (for example, a user who has browsed video content on a certain video website). The information on the followed person may be information on a person related to the personal preferences of the target user, or information on a related person extracted from the viewing records of the target user. As an example, suppose the personal information of the target user records that the target user's personal preference is football; the information on the followed person may then be the name, nationality and other information of a certain football star.
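As a hedged illustration of this step, the sketch below maps a user's recorded preference to followed-person records. The profile schema and the preference-to-person lookup table are assumptions made for illustration, not structures defined by the patent.

```python
# Minimal sketch: derive followed-person info from a user profile.
# The profile fields and the lookup table below are illustrative assumptions.

def get_followed_persons(profile, preference_to_persons):
    """Return followed-person info records matching the user's recorded preference."""
    preference = profile.get("preference")
    return preference_to_persons.get(preference, [])

# Hypothetical data: a user whose recorded personal preference is football.
profile = {"user_id": "u1", "preference": "football"}
lookup = {
    "football": [{"name": "Star XX", "nationality": "Country YY"}],
}
print(get_followed_persons(profile, lookup))
# -> [{'name': 'Star XX', 'nationality': 'Country YY'}]
```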
Step 202: obtain the person-identification-information set corresponding to the target video.
In this embodiment, the executing body may obtain, remotely or locally through a wired or wireless connection, the person-identification-information set corresponding to the target video. The target video may be a video in a preset video list or video set (for example, a video of a certain class of film and television works provided by a certain video website). The person-identification-information set may be a set of information for which a correspondence with the target video has been established in advance. As an example, if the target video is a film, the person identification information in the set may be the names, photos, dates of birth and other information of the actors appearing in the film.
In this embodiment, person identification information may be identification information of a person characterized by a facial image obtained in advance by performing face recognition on the video frames of the target video. Face recognition is a biometric identification technology that identifies a person based on facial feature information. Face recognition may be performed using various algorithms, such as recognition algorithms based on facial feature points, recognition algorithms based on whole facial images, template-based recognition algorithms, or algorithms based on neural networks.
In some optional implementations of this embodiment, the executing body or another electronic device may establish the person-identification-information set in advance through the following steps:
First, the video frames of the target video are obtained. The obtained video frames may be all of the video frames of the target video or some of them (for example, the video frames played during a certain period of playback time in the target video).
Then, the facial images contained in the obtained video frames are recognized to obtain feature data of the recognized facial images. The feature data may characterize texture features, color features, shape features and the like of the facial images. Various feature extraction methods may be used to extract the feature data of the facial images. For example, a Gaussian-filter-based method may be used to extract facial contour feature data, a feature extraction algorithm such as SIFT (Scale-Invariant Feature Transform) may be used to extract feature points, or a facial feature extraction model trained with a machine learning method may be used to extract the feature data. The extracted features may be represented as feature vectors, sets of binarized feature values, and so on.
Optionally, the obtained video frames may be input into a convolutional neural network trained in advance to obtain the feature data of the facial images contained in the video frames. The convolutional neural network is used to characterize the correspondence between a video frame and the feature data of the facial images it contains. The feature data extracted by the convolutional neural network may be a feature map obtained from the facial image through multiple convolution and pooling operations. The training samples of the convolutional neural network may include multiple sample facial images and annotated feature data for each sample facial image; using a machine learning method, with each sample facial image as input and its corresponding annotated feature data as output, the convolutional neural network is obtained through training.
The convolutional neural network may be obtained by training an initialized convolutional neural network, which may be an untrained convolutional neural network or one whose training has not been completed. Each layer of the initialized network may be provided with initial parameters, which are continuously adjusted during training. In this way, a video frame can be input at the input side of the convolutional neural network, processed in turn by each of its layers, and the feature data of the facial images contained in the frame output at the output side.
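The repeated convolution-and-pooling processing described above can be illustrated with a toy sketch. This is not a trained network: the single fixed averaging kernel, the ReLU, and the 2x2 pooling are assumptions chosen only to show the shape of the computation, using NumPy.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D convolution (cross-correlation) of a grayscale image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2(x):
    """2x2 max pooling with stride 2 (assumes even dimensions)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def extract_features(frame, kernel):
    """One convolution + pooling stage, standing in for the trained CNN."""
    return max_pool2(np.maximum(conv2d_valid(frame, kernel), 0))  # ReLU then pool

frame = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 "face crop"
kernel = np.ones((3, 3)) / 9.0                    # averaging kernel (assumption)
features = extract_features(frame, kernel)
print(features.shape)  # -> (3, 3)
```

An 8x8 input shrinks to 6x6 after the valid 3x3 convolution and to 3x3 after pooling, mirroring how each stage condenses the frame into a smaller feature map.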
Subsequently, the obtained feature data is matched against the face feature data in the preset face-feature-data set to obtain at least one piece of target face feature data. Here, a similarity algorithm may be used to calculate the similarity between the obtained feature data and each piece of face feature data in the face-feature-data set, and the face feature data whose similarity exceeds a similarity threshold is determined as target face feature data. The similarity algorithm may include but is not limited to at least one of the following: the cosine similarity algorithm, the Pearson correlation coefficient algorithm, and the like.
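A minimal sketch of the cosine-similarity variant of this matching step follows; the gallery entries, the feature vectors and the threshold value are illustrative assumptions, not values from the patent.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_target_features(query, gallery, threshold=0.9):
    """Return gallery entries whose similarity to `query` exceeds the threshold."""
    return [name for name, feat in gallery.items()
            if cosine_similarity(query, feat) > threshold]

# Hypothetical preset face-feature-data set.
gallery = {
    "person_a": [1.0, 0.0, 0.0],
    "person_b": [0.0, 1.0, 0.0],
}
print(match_target_features([0.98, 0.05, 0.0], gallery))  # -> ['person_a']
```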
Finally, the person-identification-information set is determined based on the person identification information corresponding to each piece of the at least one piece of target face feature data. Here, correspondences between the face feature data in the face-feature-data set and person identification information may be established in advance, so that, based on these correspondences, the person identification information corresponding to each piece of target face feature data can be obtained. It should be noted that all of the obtained person identification information may be determined as the person-identification-information set, or person identification information meeting a preset condition (for example, a condition on the time for which the facial image of the characterized person appears in the target video) may be extracted from the obtained person identification information as the person-identification-information set.
In some optional implementations of this embodiment, the executing body or another electronic device may determine the person-identification-information set according to the following steps:
First, for the person identification information corresponding to each piece of the at least one piece of target face feature data, the duration for which the facial image characterized by the person identification information appears in the target video is determined based on the video frames of the target video in which that facial image appears and on the frame rate of the target video. In response to determining that the duration exceeds a preset time threshold, the person identification information is extracted.
Then, the extracted pieces of person identification information are determined as the person-identification-information set.
As an example, suppose that every video frame in several video segments of the target video contains the facial image characterized by a certain piece of person identification information. Then, for each of these segments, the number of video frames in the segment is divided by the frame rate of the target video, and the result is the duration of the segment. The durations of the segments are then added up, and the sum is the duration for which the facial image characterized by the person identification information appears in the target video.
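The frame-count arithmetic in this example can be sketched directly; the segment lengths, frame rate and threshold below are illustrative values, not values given in the patent.

```python
def appearance_duration(segment_frame_counts, fps):
    """Total seconds a face appears: per-segment frame count / frame rate, summed."""
    return sum(frames / fps for frames in segment_frame_counts)

def keep_identification(segment_frame_counts, fps, time_threshold):
    """Keep the person identification info only if its duration exceeds the threshold."""
    return appearance_duration(segment_frame_counts, fps) > time_threshold

# A face present in two segments of 90 and 60 frames at 30 fps: 3 s + 2 s = 5 s.
print(appearance_duration([90, 60], fps=30))                    # -> 5.0
print(keep_identification([90, 60], fps=30, time_threshold=4))  # -> True
```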
Step 203: determine whether target person identification information exists in the person-identification-information set.
In this embodiment, based on the person-identification-information set obtained in step 202, the executing body may determine whether target person identification information exists in the person-identification-information set. The person characterized by the target person identification information is related to the followed person. Here, "related" may mean that the information on the followed person and the person identification information are completely or partly consistent.
As an example, suppose the information on the followed person is "Wang XX" and a certain piece of person identification information is "Wang XX"; that person identification information is then determined to be target person identification information. Or suppose the information on the followed person is "Wang XX, member of group XX" and a certain piece of person identification information is "Li XX, member of group XX"; that person identification information is also determined to be target person identification information.
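The "completely or partly consistent" check in these examples can be sketched as a field-overlap test. Treating comma-separated fields as the unit of comparison is an assumption made for illustration; the patent does not fix a matching rule.

```python
def is_target_identification(followed_info, identification_info):
    """True if the two info strings share at least one comma-separated field."""
    followed_fields = {f.strip() for f in followed_info.split(",")}
    id_fields = {f.strip() for f in identification_info.split(",")}
    return bool(followed_fields & id_fields)

# Completely consistent:
print(is_target_identification("Wang XX", "Wang XX"))  # -> True
# Partly consistent (shared group membership):
print(is_target_identification("Wang XX, member of group XX",
                               "Li XX, member of group XX"))  # -> True
# Unrelated:
print(is_target_identification("Wang XX", "Li YY"))  # -> False
```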
Step 204: in response to determining that it exists, generate recommendation information related to the target video.
In this embodiment, the executing body may, in response to determining that target person identification information exists in the person-identification-information set, generate recommendation information related to the target video. The recommendation information related to the target video may include but is not limited to at least one of the following: the title, synopsis, picture, link address and the like of the target video.
Optionally, after generating the recommendation information, the executing body may send it to the terminal device used by the target user.
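The recommendation fields enumerated above (title, synopsis, picture, link address) can be assembled as a simple record. The field names and the metadata source below are assumptions for illustration only.

```python
def build_recommendation(video_metadata):
    """Assemble recommendation information from target-video metadata."""
    return {
        "title": video_metadata["title"],
        "synopsis": video_metadata.get("synopsis", ""),
        "picture": video_metadata.get("picture", ""),
        "link": video_metadata["link"],
    }

# Hypothetical target-video metadata.
video = {"title": "Film XXXXXXX", "link": "https://example.com/v/1",
         "synopsis": "A film.", "picture": "poster.jpg"}
print(build_recommendation(video)["title"])  # -> Film XXXXXXX
```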
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for generating information according to this embodiment. In the application scenario of Fig. 3, the server 301 first obtains the information 302 on the followed person whom the target user follows (the name of the followed person, "Wang XX"); the target user is a member of a certain video website. Then, the server 301 obtains the person-identification-information set 303 corresponding to the target video (the film "XXXXXXX"). Subsequently, the server 301 determines that the target person identification information 3031 containing "Wang XX" exists in the set 303. Finally, the server 301 generates the recommendation information 304 related to the film "XXXXXXX", where the recommendation information includes the film title, the name of the lead actor and the link address of the video.
In the method provided by the above embodiment of the present application, the information on the followed person whom the target user follows is obtained first; that information is then matched against the obtained person-identification-information set to obtain target person identification information; and recommendation information related to the target video is finally generated. Here, person identification information is identification information of a person characterized by a facial image obtained in advance by performing face recognition on the video frames of the target video. This improves the targeting of the generated recommendation information related to the target video.
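Putting steps 201 through 204 together, a minimal end-to-end sketch follows; the identification set, the video metadata and the field-overlap matching rule are all illustrative assumptions rather than the patented implementation.

```python
def fields(info):
    """Split an info string into its comma-separated fields (assumed format)."""
    return {f.strip() for f in info.split(",")}

def recommend_if_related(followed_info, identification_set, video_metadata):
    """Steps 201-204: return recommendation info if any identification matches."""
    for identification in identification_set:
        if fields(followed_info) & fields(identification):  # step 203
            return {"title": video_metadata["title"],       # step 204
                    "link": video_metadata["link"]}
    return None

identification_set = ["Wang XX", "Li XX"]  # step 202 (precomputed for the video)
video = {"title": "Film XXXXXXX", "link": "https://example.com/v/1"}
print(recommend_if_related("Wang XX", identification_set, video))
# -> {'title': 'Film XXXXXXX', 'link': 'https://example.com/v/1'}
print(recommend_if_related("Zhao YY", identification_set, video))  # -> None
```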
With further reference to Fig. 4, a flow 400 of another embodiment of the method for generating information is shown. The flow 400 of the method for generating information includes the following steps:
Step 401: obtain user information of the target user.
In this embodiment, the execution body of the method for generating information (for example, the server shown in Fig. 1) may obtain the user information of the target user remotely or locally through a wired or wireless connection. The target user may be a user in a preset user list or user set (for example, a member of a video website), or a user meeting certain conditions (for example, a user who has browsed video content on a video website). The user information of the target user may include, but is not limited to, at least one of the following: user portrait information of the target user, a viewing record of the target user, geographic location information of the target user, and the like.
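The user information fields listed above can be sketched as a simple record. The following is a dependency-free illustration only; the class and field names are assumptions, not part of the specification:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class UserInfo:
    """Illustrative container for a target user's information."""
    user_id: str
    # e.g. {"age": 28, "hobbies": [...], "favorite_stars": [...]}
    portrait: dict = field(default_factory=dict)
    # identifiers of videos the user has watched
    viewing_record: List[str] = field(default_factory=list)
    geo_location: Optional[str] = None

info = UserInfo("u001",
                portrait={"age": 28, "favorite_stars": ["Wang XX"]},
                viewing_record=["v42"],
                geo_location="Beijing")
```

Any of the three optional fields may be absent in practice; the later steps only require whichever field they consume.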
Step 402: based on the user information, obtain information on a person followed by the target user.
In this embodiment, based on the user information obtained in step 401, the execution body may obtain information on the person followed by the target user. The information on the followed person may be information on a person related to the target user's personal preferences, or information on a related person extracted from the target user's viewing record.
Optionally, the user information may be user portrait information, and the execution body may determine the information on the followed person from it. As an example, the execution body may extract from the user portrait information the target user's age, hobbies, and favorite stars, and determine the names of the target user's favorite stars as the information on the followed person. The execution body may also determine, based on information such as the target user's age and hobbies, the person-of-interest information of other users in the same age bracket and/or with the same hobbies as the target user as the information on the followed person.
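The portrait-based derivation described above can be sketched as a small helper. This is an illustrative simplification; the function name, the portrait keys, and the flat list of peer interests are all assumptions:

```python
def followed_persons_from_portrait(portrait, peer_interests=None):
    """Derive followed-person names from a user portrait.

    Favorite stars listed in the portrait are taken directly; optionally,
    the interests of peer users (same age bracket / same hobbies) are
    merged in, preserving order and dropping duplicates.
    """
    persons = list(portrait.get("favorite_stars", []))
    for name in (peer_interests or []):
        if name not in persons:
            persons.append(name)
    return persons

followed = followed_persons_from_portrait(
    {"age": 28, "favorite_stars": ["Wang XX"]},
    peer_interests=["Li XX"])
# followed == ["Wang XX", "Li XX"]
```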
Optionally, the execution body may also determine the information on the followed person from the target user's history records. As an example, when the history record is a viewing history, the execution body may obtain information on persons related to the videos the target user has watched (for example, the lead actors of films and TV series) and determine it as the information on the followed person. When the history record is a search history, the execution body may obtain the target user's search history and determine the person information contained in it as the information on the followed person.
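The viewing-history variant can be sketched as follows. The mapping from video identifiers to cast lists is a hypothetical input (the patent does not specify where the cast metadata comes from):

```python
def followed_persons_from_history(viewing_history, video_cast):
    """Collect lead actors of watched videos as followed persons.

    `video_cast` maps a video identifier to its cast list. Order of
    first appearance is preserved and duplicates are dropped.
    """
    persons = []
    for video_id in viewing_history:
        for actor in video_cast.get(video_id, []):
            if actor not in persons:
                persons.append(actor)
    return persons
```

A search-history variant would be analogous, scanning the recorded queries for person names instead of cast lists.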
Step 403: obtain a personal identification information set corresponding to a target video.
In this embodiment, the execution body may obtain the personal identification information set corresponding to the target video remotely or locally through a wired or wireless connection. The target video may be a video in a preset video list or video set (for example, a video of a certain category of films and TV programs provided by a video website). The personal identification information set is a set of information for which a correspondence with the target video has been established in advance.
In this embodiment, a piece of personal identification information may be identification information of a person represented by a facial image obtained in advance by performing face recognition on video frames of the target video.
Step 404: determine whether target personal identification information exists in the personal identification information set.
In this embodiment, based on the personal identification information set obtained in step 403, the execution body may determine whether target personal identification information exists in the set. The target personal identification information identifies a person related to the followed person. Here, the execution body may determine personal identification information that completely or partially matches the information on the followed person as the target personal identification information; that is, the person identified by the target personal identification information is related to the followed person.
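The complete-or-partial match described above can be sketched as a simple string comparison. This is a minimal interpretation: "partial match" is taken here as substring containment in either direction, which the patent leaves unspecified:

```python
def find_target_identifications(followed_person, id_set):
    """Return identification entries fully or partially matching the
    followed person (substring match in either direction)."""
    return [pid for pid in id_set
            if pid == followed_person
            or followed_person in pid
            or pid in followed_person]
```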
Step 405: in response to determining that it exists, generate recommendation information related to the target video.
In this embodiment, in response to determining that target personal identification information exists in the personal identification information set, the execution body may generate recommendation information related to the target video. The recommendation information related to the target video may include, but is not limited to, at least one of the following: a title, a synopsis, a picture, a link address, and the like.
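Assembling the recommendation fields listed above can be sketched as a simple mapping from video metadata. The metadata key names below are illustrative assumptions:

```python
def build_recommendation(video_meta):
    """Assemble recommendation information for a target video from its
    metadata; absent fields come through as None."""
    return {
        "title": video_meta.get("title"),
        "synopsis": video_meta.get("synopsis"),
        "picture": video_meta.get("poster_url"),
        "link": video_meta.get("url"),
    }
```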
Steps 403, 404, and 405 above are consistent with steps 202, 203, and 204 in the previous embodiment, respectively; the descriptions of steps 202, 203, and 204 also apply to steps 403, 404, and 405 and are not repeated here.
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for generating information in this embodiment highlights the step of obtaining information on the person followed by the target user based on the target user's user information. The scheme described in this embodiment can therefore obtain that information more accurately, further improving the pertinence of the recommendation information generated for the target video.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for generating information. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be applied to various electronic devices.
As shown in Fig. 5, the apparatus 500 for generating information of this embodiment includes: a first acquisition unit 501 configured to obtain information on a person followed by a target user; a second acquisition unit 502 configured to obtain a personal identification information set corresponding to a target video, where personal identification information is identification information of a person represented by a facial image obtained in advance by performing face recognition on video frames of the target video; a determination unit 503 configured to determine whether target personal identification information exists in the personal identification information set, where the target personal identification information identifies a person related to the followed person; and a generation unit 504 configured to generate, in response to determining that it exists, recommendation information related to the target video.
In this embodiment, the first acquisition unit 501 may obtain the information on the person followed by the target user remotely or locally through a wired or wireless connection. The target user may be a user in a preset user list or user set (for example, a member of a video website), or a user meeting certain conditions (for example, a user who has browsed video content on a video website). The information on the followed person may be information on a person related to the target user's personal preferences, or information on a related person extracted from the target user's viewing record.
In this embodiment, the second acquisition unit 502 may obtain the personal identification information set corresponding to the target video remotely or locally through a wired or wireless connection. The target video may be a video in a preset video list or video set (for example, a video of a certain category of films and TV programs provided by a video website). The personal identification information set is a set of information for which a correspondence with the target video has been established in advance. A piece of personal identification information may be identification information of a person represented by a facial image obtained in advance by performing face recognition on video frames of the target video.
In this embodiment, based on the personal identification information set obtained by the second acquisition unit 502, the determination unit 503 may determine whether target personal identification information exists in the set. The target personal identification information identifies a person related to the followed person. Here, personal identification information that completely or partially matches the information on the followed person may be determined as the target personal identification information; that is, the person identified by the target personal identification information is related to the followed person.
In this embodiment, in response to determining that target personal identification information exists in the personal identification information set, the generation unit 504 may generate recommendation information related to the target video. The recommendation information related to the target video may include, but is not limited to, at least one of the following: a title, a synopsis, a picture, a link address, and the like.
In some optional implementations of this embodiment, the apparatus 500 for generating information, or another electronic device, may establish the personal identification information set in advance as follows: obtain video frames of the target video; recognize the facial images contained in the obtained video frames and obtain feature data of the recognized facial images; match the obtained feature data against face feature data in a preset face feature data set to obtain at least one piece of target face feature data; and determine the personal identification information set based on the personal identification information corresponding to each piece of the at least one piece of target face feature data.
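The matching step above can be sketched with cosine similarity between extracted feature vectors and a preset reference set. The similarity measure and the 0.8 threshold are illustrative choices; the patent does not specify the matching criterion:

```python
import math

def match_face_features(frame_features, known_features, threshold=0.8):
    """Match extracted face feature vectors against a preset feature set.

    `known_features` maps a personal identification to a reference
    vector; cosine similarity at or above `threshold` counts as a match.
    Returns the matched identifications without duplicates.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    matched = []
    for vec in frame_features:
        for person_id, ref in known_features.items():
            if cosine(vec, ref) >= threshold and person_id not in matched:
                matched.append(person_id)
    return matched
```

A production system would use an approximate-nearest-neighbour index rather than this exhaustive scan, but the correspondence (feature vector → identification) is the same.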
In some optional implementations of this embodiment, the apparatus 500 for generating information, or another electronic device, may determine the personal identification information set using the following steps: for the personal identification information corresponding to each piece of the at least one piece of target face feature data, determine, based on the video frames of the target video in which the facial image identified by the personal identification information appears and the frame rate of the target video, the duration for which that facial image appears in the target video; in response to determining that the duration exceeds a preset time threshold, extract the personal identification information; and determine the extracted pieces of personal identification information as the personal identification information set.
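The duration computation above reduces to frame counting: N frames at a frame rate of fps approximate N / fps seconds of screen time. A minimal sketch, assuming each detected frame contributes equally (the patent does not address gaps between appearances):

```python
def filter_by_screen_time(appearances, fps, min_seconds):
    """Keep only persons whose face appears on screen long enough.

    `appearances` maps a personal identification to the list of frame
    indices where that face was detected; with frame rate `fps`,
    N detected frames approximate a duration of N / fps seconds.
    """
    result = []
    for person_id, frames in appearances.items():
        duration = len(frames) / fps
        if duration > min_seconds:
            result.append(person_id)
    return result

# A face detected in 300 frames of a 25 fps video has been
# on screen for roughly 300 / 25 = 12 seconds.
```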
In some optional implementations of this embodiment, the apparatus 500 for generating information, or another electronic device, may obtain the feature data of the recognized facial images using the following steps: input the obtained video frames into a pre-trained convolutional neural network to obtain the feature data of the facial images contained in the video frames, where the convolutional neural network characterizes the correspondence between a video frame and the feature data of the facial images it contains.
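The frame-to-feature correspondence can be sketched as a callable that maps a frame to a fixed-length vector. This is a dependency-free stand-in, not a real CNN: an actual system would use a pre-trained deep face model, whereas the "network" below merely mean-pools pixel rows into bins to keep the sketch runnable:

```python
class FaceEmbeddingCNN:
    """Minimal stand-in for a pre-trained CNN mapping a video frame
    (2-D list of pixel intensities) to face feature data."""

    def __init__(self, embedding_dim=4):
        self.embedding_dim = embedding_dim

    def __call__(self, frame):
        # Pool the rows of the frame into `embedding_dim` bins,
        # averaging pixel intensities within each bin.
        rows = len(frame)
        bins = [0.0] * self.embedding_dim
        counts = [0] * self.embedding_dim
        for i, row in enumerate(frame):
            b = i * self.embedding_dim // rows
            bins[b] += sum(row) / len(row)
            counts[b] += 1
        return [v / c if c else 0.0 for v, c in zip(bins, counts)]

cnn = FaceEmbeddingCNN()
embedding = cnn([[0, 0], [1, 1], [2, 2], [3, 3]])  # -> [0.0, 1.0, 2.0, 3.0]
```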
In some optional implementations of this embodiment, the first acquisition unit 501 may include: a first acquisition module (not shown) configured to obtain user information of the target user; and a second acquisition module (not shown) configured to obtain, based on the user information, information on the person followed by the target user.
In the apparatus provided by the above embodiment of the present application, the first acquisition unit 501 obtains information on a person followed by a target user; the determination unit 503 then matches that information against the personal identification information set obtained by the second acquisition unit 502 to obtain target personal identification information; and the generation unit 504 finally generates recommendation information related to the target video. Here, the personal identification information is identification information of a person represented by a facial image obtained in advance by performing face recognition on video frames of the target video. This improves the pertinence of the recommendation information generated for the target video.
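The cooperation of the units described above can be summarised in one end-to-end sketch. The matching rule (exact or substring match) and the metadata field names are illustrative assumptions:

```python
def generate_recommendation(followed_person, person_id_set, video_meta):
    """End-to-end sketch of the apparatus: match the followed person
    against the video's personal identification set and, if a related
    identification exists, emit recommendation information."""
    targets = [pid for pid in person_id_set
               if pid == followed_person or followed_person in pid]
    if not targets:
        return None  # no target identification -> no recommendation
    return {"title": video_meta.get("title"),
            "lead": targets[0],
            "link": video_meta.get("url")}

rec = generate_recommendation(
    "Wang XX", {"Wang XX"},
    {"title": "XXXXXXX", "url": "https://example.com/v42"})
```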
Referring now to Fig. 6, a schematic structural diagram of a computer system 600 suitable for implementing the server of the embodiments of the present application is shown. The server shown in Fig. 6 is merely an example and should not impose any limitation on the functionality or scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage portion 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, ROM 602, and RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion 608 including a hard disk and the like; and a communication portion 609 including a network interface card such as a LAN card or a modem. The communication portion 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read therefrom is installed into the storage portion 608 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-described functions defined in the method of the present application are performed. It should be noted that the computer-readable medium described herein may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: wireless, wire, optical cable, RF, or any suitable combination of the above.
Computer program code for carrying out the operations of the present application may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code containing one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented in software or in hardware. The described units may also be provided in a processor, for example described as: a processor including a first acquisition unit, a second acquisition unit, a determination unit, and a generation unit. The names of these units do not in some cases limit the units themselves; for example, the first acquisition unit may also be described as "a unit for obtaining information on a person followed by a target user".
As another aspect, the present application also provides a computer-readable medium, which may be included in the server described in the above embodiments, or may exist separately without being assembled into the server. The computer-readable medium carries one or more programs that, when executed by the server, cause the server to: obtain information on a person followed by a target user; obtain a personal identification information set corresponding to a target video, where personal identification information is identification information of a person represented by a facial image obtained in advance by performing face recognition on video frames of the target video; determine whether target personal identification information exists in the personal identification information set, where the target personal identification information identifies a person related to the followed person; and, in response to determining that it exists, generate recommendation information related to the target video.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features; it should also cover, without departing from the above inventive concept, other technical solutions formed by any combination of the above technical features or their equivalents, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.
Claims (12)
1. A method for generating information, comprising:
obtaining information on a person followed by a target user;
obtaining a personal identification information set corresponding to a target video, wherein personal identification information is identification information of a person represented by a facial image obtained in advance by performing face recognition on video frames of the target video;
determining whether target personal identification information exists in the personal identification information set, wherein the target personal identification information identifies a person related to the followed person; and
in response to determining that it exists, generating recommendation information related to the target video.
2. The method according to claim 1, wherein the personal identification information set is established in advance as follows:
obtaining video frames of the target video;
recognizing facial images contained in the obtained video frames and obtaining feature data of the recognized facial images;
matching the obtained feature data against face feature data in a preset face feature data set to obtain at least one piece of target face feature data; and
determining the personal identification information set based on personal identification information corresponding to each piece of the preset at least one piece of target face feature data.
3. The method according to claim 2, wherein determining the personal identification information set based on the personal identification information corresponding to the preset at least one piece of target face feature data comprises:
for the personal identification information corresponding to each piece of the preset at least one piece of target face feature data, determining, based on the video frames of the target video in which the facial image identified by the personal identification information appears and the frame rate of the target video, the duration for which that facial image appears in the target video; in response to determining that the duration exceeds a preset time threshold, extracting the personal identification information; and
determining the extracted pieces of personal identification information as the personal identification information set.
4. The method according to claim 2, wherein recognizing the facial images contained in the obtained video frames and obtaining the feature data of the recognized facial images comprises:
inputting the obtained video frames into a pre-trained convolutional neural network to obtain the feature data of the facial images contained in the video frames, wherein the convolutional neural network characterizes the correspondence between a video frame and the feature data of the facial images it contains.
5. The method according to any one of claims 1-4, wherein obtaining the information on the person followed by the target user comprises:
obtaining user information of the target user; and
obtaining, based on the user information, the information on the person followed by the target user.
6. An apparatus for generating information, comprising:
a first acquisition unit configured to obtain information on a person followed by a target user;
a second acquisition unit configured to obtain a personal identification information set corresponding to a target video, wherein personal identification information is identification information of a person represented by a facial image obtained in advance by performing face recognition on video frames of the target video;
a determination unit configured to determine whether target personal identification information exists in the personal identification information set, wherein the target personal identification information identifies a person related to the followed person; and
a generation unit configured to generate, in response to determining that it exists, recommendation information related to the target video.
7. The apparatus according to claim 6, wherein the personal identification information set is established in advance as follows:
obtaining video frames of the target video;
recognizing facial images contained in the obtained video frames and obtaining feature data of the recognized facial images;
matching the obtained feature data against face feature data in a preset face feature data set to obtain at least one piece of target face feature data; and
determining the personal identification information set based on personal identification information corresponding to each piece of the preset at least one piece of target face feature data.
8. The apparatus according to claim 7, wherein determining the personal identification information set based on the personal identification information corresponding to the preset at least one piece of target face feature data comprises:
for the personal identification information corresponding to each piece of the at least one piece of target face feature data, determining, based on the video frames of the target video in which the facial image identified by the personal identification information appears and the frame rate of the target video, the duration for which that facial image appears in the target video; in response to determining that the duration exceeds a preset time threshold, extracting the personal identification information; and
determining the extracted pieces of personal identification information as the personal identification information set.
9. The apparatus according to claim 7, wherein recognizing the facial images contained in the obtained video frames and obtaining the feature data of the recognized facial images comprises:
inputting the obtained video frames into a pre-trained convolutional neural network to obtain the feature data of the facial images contained in the video frames, wherein the convolutional neural network characterizes the correspondence between a video frame and the feature data of the facial images it contains.
10. The apparatus according to any one of claims 6-9, wherein the first acquisition unit comprises:
a first acquisition module configured to obtain user information of the target user; and
a second acquisition module configured to obtain, based on the user information, the information on the person followed by the target user.
11. A server, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-5.
12. A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810235162.1A CN108446385A (en) | 2018-03-21 | 2018-03-21 | Method and apparatus for generating information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108446385A true CN108446385A (en) | 2018-08-24 |
Family
ID=63196050
2018-03-21: Application CN201810235162.1A filed in China; published as CN108446385A (status: Pending)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1760633A2 (en) * | 2005-09-02 | 2007-03-07 | Sony Corporation | Video processing apparatus |
CN102542249A (en) * | 2010-11-01 | 2012-07-04 | Microsoft Corporation | Face recognition in video content |
CN103488764A (en) * | 2013-09-26 | 2014-01-01 | TVMining (Beijing) Media Technology Co., Ltd. | Personalized video content recommendation method and system |
CN105872717A (en) * | 2015-10-26 | 2016-08-17 | LeTV Mobile Intelligent Information Technology (Beijing) Co., Ltd. | Video processing method and system, video player and cloud server |
CN105354543A (en) * | 2015-10-29 | 2016-02-24 | Xiaomi Inc. | Video processing method and apparatus |
CN105868684A (en) * | 2015-12-10 | 2016-08-17 | LeTV Information Technology (Beijing) Co., Ltd. | Video information acquisition method and apparatus |
CN107784092A (en) * | 2017-10-11 | 2018-03-09 | Shenzhen Gionee Communication Equipment Co., Ltd. | Method, server and computer-readable medium for recommending hot words |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109271557A (en) * | 2018-08-31 | 2019-01-25 | Beijing ByteDance Network Technology Co., Ltd. | Method and apparatus for outputting information |
CN109241344A (en) * | 2018-08-31 | 2019-01-18 | Beijing ByteDance Network Technology Co., Ltd. | Method and apparatus for processing information |
CN109241344B (en) * | 2018-08-31 | 2021-11-26 | Beijing ByteDance Network Technology Co., Ltd. | Method and apparatus for processing information |
CN109523344A (en) * | 2018-10-16 | 2019-03-26 | Shenzhen OneConnect Smart Technology Co., Ltd. | Product information recommendation method, apparatus, computer device and storage medium |
WO2020078299A1 (en) * | 2018-10-16 | 2020-04-23 | Huawei Technologies Co., Ltd. | Method for processing video file, and electronic device |
CN111259698B (en) * | 2018-11-30 | 2023-10-13 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and device for acquiring image |
CN111259698A (en) * | 2018-11-30 | 2020-06-09 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and device for acquiring image |
CN109947988A (en) * | 2019-03-08 | 2019-06-28 | Baidu Online Network Technology (Beijing) Co., Ltd. | Information processing method, apparatus, terminal device and server |
CN110909651A (en) * | 2019-11-15 | 2020-03-24 | Tencent Technology (Shenzhen) Co., Ltd. | Video subject person identification method, device, equipment and readable storage medium |
CN111310731A (en) * | 2019-11-15 | 2020-06-19 | Tencent Technology (Shenzhen) Co., Ltd. | Artificial-intelligence-based video recommendation method, device, equipment and storage medium |
CN111310731B (en) * | 2019-11-15 | 2024-04-09 | Tencent Technology (Shenzhen) Co., Ltd. | Artificial-intelligence-based video recommendation method, device, equipment and storage medium |
CN110909651B (en) * | 2019-11-15 | 2023-12-26 | Tencent Technology (Shenzhen) Co., Ltd. | Video subject person identification method, device, equipment and readable storage medium |
CN111459587A (en) * | 2020-03-27 | 2020-07-28 | Beijing Sankuai Online Technology Co., Ltd. | Information display method, device, equipment and storage medium |
CN111666908A (en) * | 2020-06-09 | 2020-09-15 | Guangzhou Baiguoyuan Information Technology Co., Ltd. | Interest portrait generation method, device and equipment for video users, and storage medium |
CN113810735A (en) * | 2020-06-15 | 2021-12-17 | Xi'an NovaStar Tech Co., Ltd. | Program image generation method and device, and service equipment system |
CN115086771A (en) * | 2021-03-16 | 2022-09-20 | Juhaokan Technology Co., Ltd. | Video recommendation media asset display method, display device and server |
CN115086771B (en) * | 2021-03-16 | 2023-10-24 | Juhaokan Technology Co., Ltd. | Video recommendation media asset display method, display device and server |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108446385A (en) | | Method and apparatus for generating information |
CN108446390A (en) | | Method and apparatus for pushing information |
CN106326391B (en) | | Multimedia resource recommendation method and device |
CN108307240B (en) | | Video recommendation method and device |
CN108989882B (en) | | Method and apparatus for outputting music pieces in video |
CN110188719B (en) | | Target tracking method and device |
CN108898185A (en) | | Method and apparatus for generating an image recognition model |
CN110278388A (en) | | Method, apparatus, device and storage medium for generating display video |
CN109086719A (en) | | Method and apparatus for outputting data |
CN108595628A (en) | | Method and apparatus for pushing information |
CN108345387A (en) | | Method and apparatus for outputting information |
CN109993150A (en) | | Method and apparatus for identifying age |
CN109829432A (en) | | Method and apparatus for generating information |
CN109582825A (en) | | Method and apparatus for generating information |
CN109857908A (en) | | Method and apparatus for matching videos |
CN109271556A (en) | | Method and apparatus for outputting information |
CN108304067A (en) | | System, method and apparatus for displaying information |
CN108446658A (en) | | Method and apparatus for recognizing facial images |
CN108280200A (en) | | Method and apparatus for pushing information |
CN111897950A (en) | | Method and apparatus for generating information |
CN109862100A (en) | | Method and apparatus for pushing information |
CN108230033A (en) | | Method and apparatus for outputting information |
CN109934142A (en) | | Method and apparatus for generating feature vectors of videos |
CN113033677A (en) | | Video classification method and device, electronic equipment and storage medium |
CN109255036A (en) | | Method and apparatus for outputting information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||