CN108471544A - Method and device for constructing a video user portrait - Google Patents

Method and device for constructing a video user portrait

Info

Publication number
CN108471544A
Authority
CN
China
Prior art keywords
target
performer
video
user
video clip
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810262253.4A
Other languages
Chinese (zh)
Other versions
CN108471544B (en)
Inventor
王程明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201810262253.4A
Publication of CN108471544A
Application granted
Publication of CN108471544B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/251Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44222Analytics of user selections, e.g. selection of programs or purchase activity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4532Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4668Learning process for intelligent management, e.g. learning user preferences for recommending movies for recommending content, e.g. movies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Abstract

The invention discloses a method and device for constructing a video user portrait. The method includes: obtaining a target video of a target user and dividing the target video into at least one video clip; determining the actor information in each video clip of the target video, counting the number of appearances of each target actor in the actor information, and calculating, based on the appearance counts, a weight value for each target actor in the target video; and calculating, according to the weight values, the target user's preference value for each target actor, then generating the target user portrait based on those preference values. By building the user portrait at the actor level within a video, the invention improves the precision of the user portrait.

Description

Method and device for constructing a video user portrait
Technical field
The present invention relates to the technical field of video recommendation, and in particular to a method and device for constructing a video user portrait.
Background art
With the rapid development of the Internet, online video has become one of the main sources from which people obtain video and entertainment information. As the number of videos grows rapidly, major video websites and clients often recommend videos to users according to each user's preferences, in order to improve the user experience.
One of the key technologies used when recommending video information to users is building a user portrait. A user portrait is an abstraction of the user's overall profile, obtained by analyzing the user's behavioral attributes (such as records of browsing or watching videos) and basic attributes (such as the user's basic information); it is a foundation for big-data applications such as personalized recommendation and automated marketing. Current user portrait data in the video industry is mainly built at the granularity of the whole films a user watches. For example, suppose a film features actor A, actor B, and actor C, and a user watches the film only because he likes actor C. If the user portrait is built at film-level granularity, the resulting portrait indicates that the user likes actors A, B, and C alike. Such a portrait cannot accurately reflect the user's true preferences and leads to inaccurate video recommendations.
Summary of the invention
In view of the above problem, the present invention provides a method and device for constructing a video user portrait that builds the portrait at the actor level within a video, thereby improving the precision of the user portrait.
To achieve the above goal, the present invention provides the following technical solution:
A method for constructing a video user portrait, including:
obtaining a target video of a target user, and dividing the target video into at least one video clip;
determining the actor information in each video clip of the target video, counting the number of appearances of each target actor in the actor information, and calculating, based on the appearance count of each target actor, a weight value for each target actor in the target video;
calculating, according to the weight value of each target actor in the target video, the target user's preference value for each target actor, and generating the target user portrait based on the target user's preference values for the target actors.
Preferably, obtaining the target video of the target user and dividing the target video into at least one video clip includes:
parsing the obtained target video of the target user to obtain the actor information in the target video, wherein the actor information includes at least one target actor;
obtaining the appearance duration of each target actor in the target video, and marking the target video according to the appearance duration of each target actor;
dividing the marked target video to obtain at least one video clip.
Preferably, determining the actor information in each video clip of the target video, counting the number of appearances of each target actor in the actor information, and calculating the weight value of each target actor in the target video based on the appearance counts includes:
determining the actor information in each video clip of the target video;
counting, according to the actor information in each video clip, the number of appearances of the corresponding target actor in each video clip;
calculating, from the number of appearances of the target actors in the video clips, the total number of appearances of all target actors in the target video;
calculating the ratio of the total number of appearances of all target actors to the number of appearances of a given target actor, and recording the ratio as the weight value of that target actor.
Preferably, calculating the target user's preference value for each target actor according to the weight value of each target actor in the target video, and generating the target user portrait based on those preference values, includes:
detecting the target user's viewing behavior record for each video clip, and determining from the viewing behavior record the number of times the target user has watched each video clip;
calculating the product of the watch count of each video clip and the weight value of the target actor corresponding to the clip, and recording the product as the target user's preference value for that target actor;
generating the target user portrait according to the target user's preference values for the actors.
Preferably, when the viewing behavior record includes the target user's fast-forward records and replay records for video clips, detecting the target user's viewing behavior record for each video clip and determining the watch counts includes:
if the target user fast-forwards through a first video clip, recording the play count of the first video clip as zero; if the target user replays a second video clip, adding one to the play count of the second video clip; and counting in this way the number of times the target user has watched each video clip.
A device for constructing a video user portrait, including:
an acquisition module, configured to obtain a target video of a target user and divide the target video into at least one video clip;
a determining module, configured to determine the actor information in each video clip of the target video, count the number of appearances of each target actor in the actor information, and calculate, based on the appearance counts, the weight value of each target actor in the target video;
a generation module, configured to calculate, according to the weight value of each target actor in the target video, the target user's preference value for each target actor, and generate the target user portrait based on those preference values.
Preferably, the acquisition module includes:
a parsing unit, configured to parse the obtained target video of the target user to obtain the actor information in the target video, wherein the actor information includes at least one target actor;
a marking unit, configured to obtain the appearance duration of each target actor in the target video and mark the target video according to the appearance duration of each target actor;
a division unit, configured to divide the marked target video to obtain at least one video clip.
Preferably, the determining module includes:
a determination unit, configured to determine the actor information in each video clip of the target video;
a statistics unit, configured to count, according to the actor information in each video clip, the number of appearances of the corresponding target actor in each video clip;
a first computing unit, configured to calculate, from the number of appearances of the target actors in the video clips, the total number of appearances of all target actors in the target video;
a second computing unit, configured to calculate the ratio of the total number of appearances of all target actors to the number of appearances of a given target actor, and record the ratio as the weight value of that target actor.
Preferably, the generation module includes:
a detection unit, configured to detect the target user's viewing behavior record for each video clip and determine from the viewing behavior record the number of times the target user has watched each video clip;
a third computing unit, configured to calculate the product of the watch count of each video clip and the weight value of the corresponding target actor, and record the product as the target user's preference value for that actor;
a generation unit, configured to generate the target user portrait according to the target user's preference values for the actors.
Preferably, the detection unit is specifically configured to:
when the viewing behavior record includes the target user's fast-forward records and replay records for video clips, if the target user fast-forwards through a first video clip, record the play count of the first video clip as zero; if the target user replays a second video clip, add one to the play count of the second video clip; and count in this way the number of times the target user has watched each video clip.
Compared with the prior art, the present invention divides the target video watched by the target user into video clips according to the actor information it contains, calculates actor weight values, and then computes the target user's preference value for each actor. These preference values, calculated accurately from the actor information, characterize the user's fondness for the actors, so the fondness for a particular actor can be captured at the actor level, forming a precise positioning of the video user at the actor level. Compared with the prior-art statistics over a whole video, this improves the precision of the user portrait.
Description of the drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow diagram of a method for constructing a video user portrait provided by Embodiment 1 of the present invention;
Fig. 2 is a flow diagram of a target video division method provided by Embodiment 2 of the present invention;
Fig. 3 is a flow diagram of a user portrait generation method provided by Embodiment 2 of the present invention;
Fig. 4 is a structural diagram of a device for constructing a video user portrait provided by Embodiment 3 of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Embodiment 1
Referring to Fig. 1, a flow diagram of a method for constructing a video user portrait provided by Embodiment 1 of the present invention, the method may include the following steps:
S11: obtain a target video of a target user, and divide the target video into at least one video clip.
A target video is a video that the user has watched. Since the embodiments of the present invention build the user portrait at the actor level, a target video that is meaningful for statistical analysis must be a video featuring people. If the target user watches a video in which no characters appear, such as the nature program Animal World, the video has no statistical value and cannot be identified as a target video. Before a user portrait is generated for a target user, a certain number of target videos must be statistically analyzed so that the target user can be portrayed precisely and more accurate video information can be recommended. In the embodiments of the present invention, every target video that meets the statistical quantity is analyzed; only the analysis of one target video is illustrated here, and the method applies equally to the other target videos.
The target user is the video user for whom the video user portrait is to be built; such a user regularly watches or accesses videos.
The target video is divided mainly according to the actor information and the appearance duration of each actor: the periods during which the actors appear in the target video are identified by face recognition, and the whole video is divided into multiple video clips at the time points where actors appear, each video clip corresponding to one or more actors. The divided video clips thus correspond to individual actors, so building the user portrait from the actors' perspective gives the film a finer granularity and makes the recommendation results more accurate. This part is described in detail in another embodiment of the invention.
S12: determine the actor information in each video clip of the target video, count the number of appearances of each target actor in the actor information, and calculate, based on the appearance counts, the weight value of each target actor in the target video.
Since each video clip presents different content and different actors, building the user portrait at the actor level requires knowing the actor information in each video clip. The actor information here characterizes the main cast of the target video, that is, the leading actors in the ordinary sense. Supporting roles and other personnel besides the leads vary between videos, their appearances within a video tend to be scattered, and they have little statistical significance; users likewise pay far more attention to the leading actors than to the others. Therefore, in the embodiments of the present invention, the actor information in each video clip includes at least one target actor, where a target actor is a leading actor.
To generate the user portrait precisely, a weight value must be assigned to each target actor in the actor information; in the present invention the weights are assigned based on the actors' appearance counts.
Specifically:
determine the actor information in each video clip of the target video;
count, according to the actor information in each video clip, the number of appearances of the corresponding target actor in each video clip, wherein the actor information includes the target actor;
calculate, from the number of appearances of the target actors in the video clips, the total number of appearances of all target actors in the target video;
calculate the ratio of the total number of appearances of all target actors to the number of appearances of a given target actor, and record the ratio as the weight value of that target actor.
For example, when processing a target video, the video is first split into several clips according to the actors' entrances and exits; each time an actor appears in a clip, it is counted as one appearance. Suppose there are two video clips in total, where clip 1 is a dialogue between actor A and actor B, and clip 2 is a performance by actor A and actor C. The total number of appearances of all actors is then 4; actor A appears 2 times and actor C appears 1 time, so the weight of actor A is 4/2 = 2 and the weight of actor C is 4/1 = 4.
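This weighting step can be written as a minimal sketch; the function name and the clip representation (a set of actor names per clip) are illustrative assumptions, not part of the patent:

```python
from collections import Counter

def actor_weights(clips):
    """Compute per-actor weight values from clip-level actor sets.

    clips: list of sets of actor names, one set per video clip.
    Weight of an actor = total appearances of all actors / that actor's
    appearances, so less frequently appearing actors get a larger weight.
    """
    appearances = Counter(actor for clip in clips for actor in clip)
    total = sum(appearances.values())
    return {actor: total / count for actor, count in appearances.items()}

# The two-clip example from the text: clip 1 = {A, B}, clip 2 = {A, C}.
weights = actor_weights([{"A", "B"}, {"A", "C"}])
# total appearances = 4; A appears twice -> 2.0; B and C once each -> 4.0
```

Note that the weight of actor B (4.0) is implied but not stated in the example above.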
S13: calculate, according to the weight value of each target actor in the target video, the target user's preference value for each target actor, and generate the target user portrait based on the target user's preference values for the target actors.
Because the target user likes each target actor to a different degree, this difference is reflected in how the target video is watched: the number of times each video clip is viewed may differ. If a video clip features an actor the user loves, the user may watch the clip again after the first viewing through operations such as replay; if the user does not like a clip, the user may fast-forward past it and never watch it. Counting the watch count of each video clip therefore reveals the user's fondness for each actor more accurately than the traditional statistics over an entire film.
Based on the watch counts and the actor weights, the target user's fondness for each target actor is expressed as a preference value, which describes the degree of the target user's preference for each target actor more objectively, so that the target user's portrait at the level of actor preference can be generated.
For example, continuing the example in S12, suppose the user watched clip 1 once and clip 2 twice. The user's preference value for actor C is then 4 × 2 = 8. A user portrait abstracts the user's overall profile by analyzing the user's behavioral attributes (such as records of browsing or watching videos) and basic attributes (such as the user's basic information). Building the user portrait from the user's preference values for actors means the portrait reflects how much the user likes each actor: the portrait is generated by analyzing the preference values as the user's behavioral information and abstracting the user's overall profile.
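The preference calculation can be sketched as follows; the function name and dictionary layout are illustrative assumptions, with the weights taken from the worked example in S12 (A → 2, C → 4, and B → 4 by the same ratio rule):

```python
def preference_values(clips, watch_counts, weights):
    """Preference value of the user for each actor: sum, over the clips
    the actor appears in, of (clip watch count) * (actor weight)."""
    prefs = {}
    for clip, count in zip(clips, watch_counts):
        for actor in clip:
            prefs[actor] = prefs.get(actor, 0) + count * weights[actor]
    return prefs

# Clip 1 = {A, B} watched once, clip 2 = {A, C} watched twice.
prefs = preference_values([{"A", "B"}, {"A", "C"}],
                          [1, 2],
                          {"A": 2, "B": 4, "C": 4})
# C: 2 * 4 = 8, matching the text; A: 1*2 + 2*2 = 6; B: 1*4 = 4
```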
Through the technical solution disclosed in Embodiment 1 of the present invention, the target video watched by the target user is divided into video clips according to the actor information it contains; after the actor weight values are calculated, the target user's preference value for each actor is computed. These preference values, calculated accurately from the actor information, characterize the user's fondness for the actors, so the fondness for a particular actor can be captured at the actor level, forming a precise positioning of the video user at the actor level. Compared with the prior-art statistics over a whole video, this improves the precision of the user portrait.
Embodiment 2
With reference to the method for constructing a video user portrait provided by Embodiment 1, Embodiment 2 of the present invention further illustrates the method in combination with a specific application scenario. Referring to Fig. 2, a flow diagram of a target video division method provided by Embodiment 2 of the present invention, the process includes:
S111: parse the obtained target video of the target user to obtain the actor information in the target video, wherein the actor information includes at least one target actor;
S112: obtain the appearance duration of each target actor in the target video, and mark the target video according to the appearance duration of each target actor;
S113: divide the marked target video to obtain at least one video clip.
First, a film X that the video user has watched is obtained. The periods during which the main cast of film X appear are identified by face recognition, and the whole of film X is divided into multiple video clips at the time points where actors appear, each video clip corresponding to one or more target actors.
For example, the target actors in the actor information of film X are actor A, actor B, and actor C. The appearance duration of each actor is marked, and dividing the marked target video yields the following video clips: clip 1, in which actor A appears; clip 2, in which actor B appears; clip 3, in which actors A and B appear; and clip 4, in which actor C appears.
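The clip division described here can be sketched in code, assuming the per-actor appearance intervals have already been produced upstream (e.g. by face recognition); the function name and the interval format are illustrative assumptions:

```python
def split_by_appearances(appearances):
    """Split a video into clips at every actor entry/exit time point.

    appearances: dict mapping actor name -> list of (start, end)
    appearance intervals (e.g. in seconds).
    Returns (start, end, actors_on_screen) clips; spans during which no
    tracked actor appears are dropped.
    """
    boundaries = sorted({t for spans in appearances.values()
                           for span in spans for t in span})
    clips = []
    for start, end in zip(boundaries, boundaries[1:]):
        on_screen = frozenset(actor for actor, spans in appearances.items()
                              if any(s <= start and end <= e for s, e in spans))
        if on_screen:
            clips.append((start, end, on_screen))
    return clips

# Two overlapping appearances produce three clips, the middle one shared:
clips = split_by_appearances({"A": [(0, 20)], "B": [(10, 30)]})
# -> [(0, 10, {"A"}), (10, 20, {"A", "B"}), (20, 30, {"B"})]
```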
A user portrait generation method is also provided in the embodiments of the present invention. Referring to Fig. 3, it includes:
S131: detect the target user's viewing behavior record for each video clip, and determine from the viewing behavior record the number of times the target user has watched each video clip;
S132: calculate the product of the watch count of each video clip and the weight value of the target actor corresponding to the clip, and record the product as the target user's preference value for that target actor;
S133: generate the target user portrait according to the target user's preference values for the actors.
When the watch counts are determined in S131, the user's viewing behavior records are used, for example fast-forward and replay operations. These are only two of the behaviors provided in this embodiment for determining watch counts; other behaviors can be weighed in the same spirit, such as adding one to the watch count from the perspective of slow playback, and they likewise belong to the inventive idea of the present invention. When the viewing behavior record includes the target user's fast-forward records and replay records for video clips, the watch counts are determined as follows: if the target user fast-forwards through a first video clip, the play count of the first video clip is recorded as zero; if the target user replays a second video clip, one is added to the play count of the second video clip; counting in this way yields the number of times the target user has watched each video clip.
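The watch-count rule just described can be sketched as follows; the event format and function name are illustrative assumptions:

```python
def watch_counts(num_clips, events):
    """Derive per-clip watch counts from a viewing behavior record.

    Each clip starts at 1 (one normal, complete viewing of the video).
    A fast-forward over a clip sets its count to 0 (treated as unwatched);
    each replay of a clip adds 1.
    events: list of (clip_index, action), action in {"fast_forward", "replay"}.
    """
    counts = [1] * num_clips
    for clip, action in events:
        if action == "fast_forward":
            counts[clip] = 0
        elif action == "replay":
            counts[clip] += 1
    return counts

# Film X example below: clip 2 (index 1) is replayed twice, the rest watched once.
counts = watch_counts(4, [(1, "replay"), (1, "replay")])
# -> [1, 3, 1, 1]
```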
For example, if the user watches film X without any fast-forward or playback operation, this is equivalent to viewing the entire film once, so segments 1, 2, 3, and 4 all have the same watch count. If the user fast-forwards through a video clip, that clip's watch count is recorded as zero (equivalently, the clip's watch count is reduced by one); if the video user watches a clip repeatedly, its watch count increases accordingly. Continuing with film X, suppose segments 1, 2, 3, and 4 are watched the following numbers of times:
Segment #1: watched 1 time
Segment #2: watched 3 times
Segment #3: watched 1 time
Segment #4: watched 1 time
These counts are obtained from the user's fast-forward and playback records; the preference values are then computed from the watch counts and the actor weight values.
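The counting rule just described (a fast-forwarded clip is recorded as zero plays; each replay adds one) can be sketched as follows. The event tuples and the action names `play`, `replay`, and `fast_forward` are assumptions for illustration, not the patent's record format.

```python
def watch_counts(num_clips, events):
    """events: list of (clip_index, action) tuples in chronological order,
    where action is 'play', 'replay', or 'fast_forward' (assumed names)."""
    counts = [0] * num_clips
    for clip, action in events:
        if action in ("play", "replay"):
            counts[clip] += 1   # each full viewing or replay adds one
        elif action == "fast_forward":
            counts[clip] = 0    # a skipped clip is recorded as zero plays
    return counts

# Film X watched once in full, with segment 2 replayed twice more:
events = [(0, "play"), (1, "play"), (2, "play"), (3, "play"),
          (1, "replay"), (1, "replay")]
print(watch_counts(4, events))  # [1, 3, 1, 1]
```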
For example, in film X, actor A appears in segment 1 (counted as 1 appearance), actor B appears in segment 2 (1 appearance), actors A and B appear in segment 3 (2 appearances), and actor C appears in segment 4 (1 appearance). The total number of appearances of all actors is therefore 1 + 1 + 2 + 1 = 5; actor A appears 2 times, actor B appears 2 times, and actor C appears 1 time.
Then the weight value of actor A = 5/2 = 2.5;
the weight value of actor B = 5/2 = 2.5;
the weight value of actor C = 5/1 = 5.
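The weight computation (the total number of appearances of all target actors divided by the appearances of the given actor) can be sketched as:

```python
from collections import Counter

def actor_weights(clip_actors):
    """clip_actors: one set of target actors per video clip."""
    appearances = Counter(a for actors in clip_actors for a in actors)
    total = sum(appearances.values())  # total appearances of all target actors
    return {actor: total / n for actor, n in appearances.items()}

# The four film X segments from the example above:
clips = [{"A"}, {"B"}, {"A", "B"}, {"C"}]
print(actor_weights(clips))  # {'A': 2.5, 'B': 2.5, 'C': 5.0}
```

Note that a rarely appearing actor receives a larger weight, which is what lets repeated viewing of a supporting actor's few clips dominate the portrait.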
The user's preference value for each actor is then computed as: preference value = (watch counts of the video clips in which the actor appears) × (the actor's weight value).
Thus, the video user's preference value for actor A = (1 + 1) × 2.5 = 5;
the preference value for actor B = (3 + 1) × 2.5 = 10;
the preference value for actor C = 1 × 5 = 5.
It follows that this video user most prefers actor B.
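Putting the watch counts and weight values together, the preference calculation above can be sketched as a small function; the data reproduce the film X example.

```python
def preference_values(clip_actors, counts, weights):
    """Preference value of an actor = (sum of the watch counts of the
    clips in which the actor appears) * (the actor's weight value)."""
    totals = {}
    for actors, n in zip(clip_actors, counts):
        for actor in actors:
            totals[actor] = totals.get(actor, 0) + n
    return {actor: n * weights[actor] for actor, n in totals.items()}

clip_actors = [{"A"}, {"B"}, {"A", "B"}, {"C"}]
counts = [1, 3, 1, 1]                     # watch counts per segment
weights = {"A": 2.5, "B": 2.5, "C": 5.0}  # actor weight values
prefs = preference_values(clip_actors, counts, weights)
print(prefs)  # {'A': 5.0, 'B': 10.0, 'C': 5.0}: actor B is most preferred
```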
Finally, the target user portrait is generated from the preference values obtained; that is, the portrait describes the user's degree of preference for each actor. In the example above, the portrait reflects that the user prefers actor B.
Correspondingly, embodiments of the present invention may further include:
pushing, to the target user according to the target user portrait, video information that matches the portrait.
In other words, the higher an actor's preference value, the more attractive that actor is to this target user, so video programs featuring the user's favorite actors can be recommended preferentially on the basis of the portrait's preference values.
As a concrete example, the leads of the film "Rush Hour" are Jackie Chan and Chris Tucker, with Zhang Jingchu in a supporting role. A fan of Zhang Jingchu, however, will favor the clips in which she appears and may watch them repeatedly, while fast-forwarding through the clips without her as long as the plot is not lost. Given this viewing behavior, this user's portrait indicates a preference for Zhang Jingchu rather than Jackie Chan or Chris Tucker, and correspondingly the recommendation results for this user should be video programs starring Zhang Jingchu rather than those starring Jackie Chan.
User portraits can likewise be used for user management. For example, suppose a video platform determines from a client's user portrait that target user A's favorite actor is actor B, and actor B is a spokesperson or signed artist of the platform. The platform can then push membership privilege information related to watching actor B to target user A, encouraging target user A to become a member of the platform. Screening potential member users with user portraits in this way greatly improves the success rate.
According to the technical solution disclosed in the second embodiment of the present invention, the target video watched by the user is divided into multiple video clips according to the time points at which actors appear; watch counts are then derived for each video clip from the user's fast-forward and playback operations; finally, the target user's preference value for each actor is computed. A user portrait can be built from these preference values, and video recommendations can then be made to the video user. Repeatedly watching the clips in which a favorite actor appears is natural user behavior, while the clips of actors the user does not like tend not to be rewatched and may simply be fast-forwarded through when the plot allows. This viewing behavior is a direct reflection of preference, so analyzing the user's fast-forward and playback behavior yields the user's degree of preference for each actor and thereby improves the precision of the user portrait.
Embodiment three
Corresponding to the methods for constructing a video user portrait disclosed in the first and second embodiments of the present invention, the third embodiment of the present invention further provides an apparatus for constructing a video user portrait. Referring to Fig. 4, the apparatus may include:
an acquisition module 10, configured to obtain a target video of a target user and divide the target video into at least one video clip;
a determining module 20, configured to determine the actor information in each video clip of the target video, count the number of appearances of each target actor in the actor information, and compute, based on the appearance count of each target actor, the weight value of each target actor in the target video;
a generation module 30, configured to compute, according to the weight value of each target actor in the target video, the target user's preference value for each target actor, and generate the target user portrait based on the target user's preference values for the target actors.
Optionally, the acquisition module includes:
a parsing unit, configured to parse the obtained target video of the target user to obtain the actor information in the target video, wherein the actor information includes at least one target actor;
a marking unit, configured to obtain the appearance duration of each target actor in the target video, and mark the target video according to the appearance duration of each target actor;
a division unit, configured to divide the marked target video to obtain the at least one video clip.
Optionally, the determining module includes:
a determination unit, configured to determine the actor information in each video clip of the target video;
a statistics unit, configured to count, according to the actor information in each video clip, the number of appearances of the corresponding target actor in each video clip;
a first computing unit, configured to compute, from the number of appearances of the target actors in the video clips, the total number of appearances of all target actors in the target video;
a second computing unit, configured to compute the ratio of the total number of appearances of all target actors to the appearance count of a target actor, and record the ratio as the weight value of that target actor.
Optionally, the generation module includes:
a detection unit, configured to detect and obtain the target user's viewing behavior record for each video clip, and determine, according to the viewing behavior record, the number of times the target user watched each video clip;
a third computing unit, configured to compute, for each video clip, the product of the clip's watch count and the weight value of the target actor corresponding to the clip, and record the product as the target user's preference value for that target actor;
a generation unit, configured to generate the target user portrait according to the target user's preference values for the actors.
Optionally, the detection unit is specifically configured to:
when the viewing behavior record includes the target user's fast-forward and playback records for the video clips, record the play count of a first video clip as zero if the target user fast-forwards through it, increment the play count of a second video clip by one if the target user replays it, and accumulate, in this way, the target user's watch count for each video clip.
In the third embodiment of the present invention, the target video watched by the target user is divided into video clips according to the actor information it contains, actor weight values are computed, and the target user's preference value for each actor is then calculated. The resulting preference values, computed precisely from the actor information, characterize the user's degree of preference at the level of individual actors, so the video user can be positioned precisely at the actor level. Compared with prior-art statistics over a video as a whole, this improves the precision of the user portrait.
The terms "first" and "second" in the specification, claims, and drawings are used to distinguish different objects rather than to describe a particular order. Moreover, the terms "comprising" and "having", and any variations thereof, are intended to cover non-exclusive inclusion: a process, method, system, product, or device that contains a series of steps or units is not limited to the listed steps or units, but may include steps or units that are not listed.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and identical or similar parts of the embodiments may be understood with reference to one another. Since the apparatus disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively brief, and the relevant details can be found in the description of the method.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

  1. A method for constructing a video user portrait, characterized by comprising:
    obtaining a target video of a target user, and dividing the target video into at least one video clip;
    determining actor information in each video clip of the target video, counting the number of appearances of each target actor in the actor information, and computing, based on the appearance count of each target actor, a weight value of each target actor in the target video;
    computing, according to the weight value of each target actor in the target video, the target user's preference value for each target actor, and generating a target user portrait based on the target user's preference values for the target actors.
  2. The method according to claim 1, characterized in that obtaining the target video of the target user and dividing the target video into at least one video clip comprises:
    parsing the obtained target video of the target user to obtain the actor information in the target video, wherein the actor information includes at least one target actor;
    obtaining the appearance duration of each target actor in the target video, and marking the target video according to the appearance duration of each target actor;
    dividing the marked target video to obtain the at least one video clip.
  3. The method according to claim 2, characterized in that determining the actor information in each video clip of the target video, counting the number of appearances of each target actor in the actor information, and computing, based on the appearance counts, the weight value of each target actor in the target video comprises:
    determining the actor information in each video clip of the target video;
    counting, according to the actor information in each video clip, the number of appearances of the corresponding target actor in each video clip;
    computing, from the number of appearances of the target actors in the video clips, the total number of appearances of all target actors in the target video;
    computing the ratio of the total number of appearances of all target actors to the appearance count of a target actor, and recording the ratio as the weight value of that target actor.
  4. The method according to claim 3, characterized in that computing the target user's preference value for each target actor according to the weight value of each target actor in the target video, and generating the target user portrait based on the target user's preference values for the target actors, comprises:
    detecting and obtaining the target user's viewing behavior record for each video clip, and determining, according to the viewing behavior record, the number of times the target user watched each video clip;
    computing, for each video clip, the product of the clip's watch count and the weight value of the target actor corresponding to the clip, and recording the product as the target user's preference value for that target actor;
    generating the target user portrait according to the target user's preference values for the actors.
  5. The method according to claim 4, characterized in that, when the viewing behavior record includes the target user's fast-forward and playback records for the video clips, detecting and obtaining the target user's viewing behavior record for each video clip and determining the target user's watch count for each video clip according to the viewing behavior record comprises:
    recording the play count of a first video clip as zero if the target user fast-forwards through the first video clip; incrementing the play count of a second video clip by one if the target user replays the second video clip; and accumulating, in this way, the target user's watch count for each video clip.
  6. An apparatus for constructing a video user portrait, characterized by comprising:
    an acquisition module, configured to obtain a target video of a target user and divide the target video into at least one video clip;
    a determining module, configured to determine actor information in each video clip of the target video, count the number of appearances of each target actor in the actor information, and compute, based on the appearance count of each target actor, a weight value of each target actor in the target video;
    a generation module, configured to compute, according to the weight value of each target actor in the target video, the target user's preference value for each target actor, and generate a target user portrait based on the target user's preference values for the target actors.
  7. The apparatus according to claim 6, characterized in that the acquisition module comprises:
    a parsing unit, configured to parse the obtained target video of the target user to obtain the actor information in the target video, wherein the actor information includes at least one target actor;
    a marking unit, configured to obtain the appearance duration of each target actor in the target video, and mark the target video according to the appearance duration of each target actor;
    a division unit, configured to divide the marked target video to obtain the at least one video clip.
  8. The apparatus according to claim 7, characterized in that the determining module comprises:
    a determination unit, configured to determine the actor information in each video clip of the target video;
    a statistics unit, configured to count, according to the actor information in each video clip, the number of appearances of the corresponding target actor in each video clip;
    a first computing unit, configured to compute, from the number of appearances of the target actors in the video clips, the total number of appearances of all target actors in the target video;
    a second computing unit, configured to compute the ratio of the total number of appearances of all target actors to the appearance count of a target actor, and record the ratio as the weight value of that target actor.
  9. The apparatus according to claim 8, characterized in that the generation module comprises:
    a detection unit, configured to detect and obtain the target user's viewing behavior record for each video clip, and determine, according to the viewing behavior record, the number of times the target user watched each video clip;
    a third computing unit, configured to compute, for each video clip, the product of the clip's watch count and the weight value of the target actor corresponding to the clip, and record the product as the target user's preference value for that target actor;
    a generation unit, configured to generate the target user portrait according to the target user's preference values for the actors.
  10. The apparatus according to claim 9, characterized in that the detection unit is specifically configured to:
    when the viewing behavior record includes the target user's fast-forward and playback records for the video clips, record the play count of a first video clip as zero if the target user fast-forwards through the first video clip, increment the play count of a second video clip by one if the target user replays the second video clip, and accumulate, in this way, the target user's watch count for each video clip.
CN201810262253.4A 2018-03-28 2018-03-28 Method and device for constructing video user portrait Active CN108471544B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810262253.4A CN108471544B (en) 2018-03-28 2018-03-28 Method and device for constructing video user portrait


Publications (2)

Publication Number Publication Date
CN108471544A true CN108471544A (en) 2018-08-31
CN108471544B CN108471544B (en) 2020-09-15

Family

ID=63265915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810262253.4A Active CN108471544B (en) 2018-03-28 2018-03-28 Method and device for constructing video user portrait

Country Status (1)

Country Link
CN (1) CN108471544B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521340A (en) * 2011-12-08 2012-06-27 中国科学院自动化研究所 Method for analyzing TV video based on role
CN103702117A (en) * 2012-09-27 2014-04-02 索尼公司 Image processing apparatus, image processing method, and program
US20140096160A1 (en) * 2012-10-01 2014-04-03 Chunghwa Wideband Best Network Co., Ltd. Electronic program guide display method and system
CN105072495A (en) * 2015-08-13 2015-11-18 天脉聚源(北京)传媒科技有限公司 Statistics method and device for person popularity and program pushing method and device
CN105095431A (en) * 2015-07-22 2015-11-25 百度在线网络技术(北京)有限公司 Method and device for pushing videos based on behavior information of user
CN105701169A (en) * 2015-12-31 2016-06-22 北京奇艺世纪科技有限公司 Film and television program retrieving method and terminal
CN105843857A (en) * 2016-03-16 2016-08-10 合网络技术(北京)有限公司 Video recommendation method and device


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109788311A (en) * 2019-01-28 2019-05-21 北京易捷胜科技有限公司 Personage's replacement method, electronic equipment and storage medium
CN109788311B (en) * 2019-01-28 2021-06-04 北京易捷胜科技有限公司 Character replacement method, electronic device, and storage medium
CN110008376A (en) * 2019-03-22 2019-07-12 广州新视展投资咨询有限公司 User's portrait vector generation method and device
CN110598618A (en) * 2019-09-05 2019-12-20 腾讯科技(深圳)有限公司 Content recommendation method and device, computer equipment and computer-readable storage medium
CN110769286A (en) * 2019-11-06 2020-02-07 山东科技大学 Channel-based recommendation method and device and storage medium
CN110769286B (en) * 2019-11-06 2021-04-27 山东科技大学 Channel-based recommendation method and device and storage medium
CN111666908A (en) * 2020-06-09 2020-09-15 广州市百果园信息技术有限公司 Interest portrait generation method, device and equipment for video user and storage medium
CN112569596A (en) * 2020-12-11 2021-03-30 腾讯科技(深圳)有限公司 Video picture display method and device, computer equipment and storage medium
CN113938712A (en) * 2021-10-13 2022-01-14 北京奇艺世纪科技有限公司 Video playing method and device and electronic equipment
CN113938712B (en) * 2021-10-13 2023-10-10 北京奇艺世纪科技有限公司 Video playing method and device and electronic equipment

Also Published As

Publication number Publication date
CN108471544B (en) 2020-09-15

Similar Documents

Publication Publication Date Title
CN108471544A (en) A kind of structure video user portrait method and device
CN107832437B (en) Audio/video pushing method, device, equipment and storage medium
CN109189951B (en) Multimedia resource recommendation method, equipment and storage medium
CN106454536B (en) The determination method and device of information recommendation degree
Jawaheer et al. Comparison of implicit and explicit feedback from an online music recommendation service
KR101949308B1 (en) Sentimental information associated with an object within media
US20150312603A1 (en) Recommending media items based on take rate signals
US11188603B2 (en) Annotation of videos using aggregated user session data
CN109040795B (en) Video recommendation method and system
CN106131703A (en) A kind of method of video recommendations and terminal
US20130232516A1 (en) Method And Apparatus for Collection and Analysis of Real-Time Audience Feedback
CN104021140B (en) A kind of processing method and processing device of Internet video
CN102929966B (en) A kind of for providing the method and system of personalized search list
CN104506894A (en) Method and device for evaluating multi-media resources
EP3096323A1 (en) Identifying media content
US20150213469A1 (en) Methods and apparatus to determine audience engagement indices associated with media presentations
KR20120051401A (en) Modeling user interest pattern server and method for modeling user interest pattern
CN103997662A (en) Program pushing method and system
CN105897847A (en) Information push method and device
Jawaheer et al. Characterisation of explicit feedback in an online music recommendation service
CN105843876A (en) Multimedia resource quality assessment method and apparatus
CN111435371A (en) Video recommendation method and system, computer program product and readable storage medium
Xu et al. Catch-up TV recommendations: show old favourites and find new ones
CN103442270B (en) A kind of method and device for the viewing-data for gathering user
CN106534984B (en) Television program pushing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant