CN109936756A - Method and device for determining a video cover - Google Patents

Method and device for determining a video cover

Info

Publication number
CN109936756A
CN109936756A (application CN201711353892.3A)
Authority
CN
China
Prior art keywords
time
user
group
watching behavior
target video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711353892.3A
Other languages
Chinese (zh)
Inventor
郭维维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Youku Information Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Youku Information Technology Beijing Co Ltd
Priority to CN201711353892.3A
Publication of CN109936756A
Legal status: Pending

Abstract

The present disclosure relates to a method and device for determining a video cover. The method comprises: determining the total view count corresponding to each time point of a target video; determining a key time point in the target video according to the total view counts corresponding to the time points of the target video; and determining the cover of the target video according to the video frames near the key time point. By determining the total view count corresponding to each time point of the target video, determining the key time point according to those counts, and determining the cover according to the video frames near the key time point, the present disclosure makes it possible to determine the cover of the target video flexibly based on the per-time-point total view counts, saving considerable human labor, while the determined cover better reflects the content of the target video that attracts users.

Description

Method and device for determining a video cover
Technical field
The present disclosure relates to the field of video technology, and in particular to a method and device for determining a video cover.
Background technique
In the related art, a video's cover is usually chosen by the video uploader or by operations staff of the video website. To pick a cover, operations staff sometimes have to watch an entire video before they can select a video frame they consider suitable as the cover. This way of determining a video cover consumes considerable human labor, is inefficient, and the chosen video frame is not necessarily the one that most attracts users.
Summary of the invention
In view of this, the present disclosure proposes a method and device for determining a video cover.
According to one aspect of the present disclosure, a method for determining a video cover is provided, comprising:
Determining the total view count corresponding to each time point of a target video;
Determining a key time point in the target video according to the total view counts corresponding to the time points of the target video;
Determining the cover of the target video according to the video frames near the key time point.
In a possible implementation, determining the key time point in the target video according to the total view counts corresponding to the time points of the target video comprises:
Determining the maximum points (local maxima) in the total view counts corresponding to the time points of the target video;
Determining the key time point in the target video according to the maximum points.
In a possible implementation, determining the cover of the target video according to the video frames near the key time point comprises:
Screening, according to the image information of the video frames near the key time point, multiple candidate video frames from the video frames near the key time point for the user to choose from;
Determining the candidate video frame chosen by the user as the static cover of the target video.
In a possible implementation, determining the cover of the target video according to the video frames near the key time point comprises:
Screening, according to the image information of the video frames near the key time point, multiple candidate video segments near the key time point for the user to choose from;
Determining the candidate video segment chosen by the user as the dynamic cover of the target video.
In a possible implementation, determining the cover of the target video according to the video frames near the key time point comprises:
Screening a video frame from the video near the key time point according to the image information of the video frames near the key time point;
Determining the screened video frame as the static cover of the target video.
In a possible implementation, determining the cover of the target video according to the video frames near the key time point comprises:
Screening a video clip near the key time point according to the image information of the video frames near the key time point;
Determining the screened video clip as the dynamic cover of the target video.
In a possible implementation, the image information of a video frame includes one or more of the frame's clarity, contrast, saturation, and sharpness.
In a possible implementation, determining the total view count corresponding to each time point of the target video comprises:
Obtaining user viewing behavior data corresponding to the target video, wherein each group of user viewing behavior data corresponds to one user viewing session;
Determining, for each group of user viewing behavior data, the view count for each time point of the target video in the corresponding viewing session;
Determining the total view count corresponding to each time point of the target video according to the view counts.
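As a rough sketch of the aggregation described above — summing each viewing session's per-time-point view counts into a total per time point — the following assumes whole-second time points and per-session counts already computed; the function and variable names are illustrative, not from the disclosure:

```python
from typing import Dict, List

def total_view_counts(
    per_session_counts: List[Dict[int, int]], duration_s: int
) -> List[int]:
    """Sum per-session view counts into a total for every whole-second
    time point of the target video."""
    totals = [0] * (duration_s + 1)
    for counts in per_session_counts:
        for t, c in counts.items():
            totals[t] += c
    return totals

# Two sessions of a 10-second video: one watched seconds 0-5,
# the other seconds 3-8, so seconds 3-5 were viewed twice.
sessions = [
    {t: 1 for t in range(0, 6)},
    {t: 1 for t in range(3, 9)},
]
print(total_view_counts(sessions, 10))
# [1, 1, 1, 2, 2, 2, 1, 1, 1, 0, 0]
```

The resulting list is exactly the data from which a total-view-count curve can be drawn.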
In a possible implementation, after the total view count corresponding to each time point of the target video is determined, the method further comprises:
Generating a total-view-count curve for the target video according to the total view counts corresponding to the time points of the target video.
In a possible implementation, determining the view count for each time point of the target video in the viewing session corresponding to each group of user viewing behavior data comprises:
For each group of user viewing behavior data, when the corresponding viewing session contains a drag behavior, obtaining the viewing start time point, the viewing end time point, the drag start time point, and the drag end time point from that group of user viewing behavior data;
Determining, according to the viewing start time point and the viewing end time point, the base view count for each time point of the target video in the corresponding viewing session;
Determining, according to the drag start time point and the drag end time point, the adjustment view count for each time point of the target video in the corresponding viewing session;
Determining the view count for each time point of the target video in the corresponding viewing session according to the base view counts and the adjustment view counts for the time points of the target video in that session.
In a possible implementation, determining the base view count for each time point of the target video in the corresponding viewing session according to the viewing start time point and the viewing end time point comprises:
Determining that the base view count is 1 for a first group of time points of the target video, wherein the first group of time points comprises the time points between the viewing start time point and the viewing end time point;
Determining that the base view count is 0 for a second group of time points of the target video, wherein the second group of time points comprises the time points before the viewing start time point and the time points after the viewing end time point.
In a possible implementation, determining the adjustment view count for each time point of the target video in the corresponding viewing session according to the drag start time point and the drag end time point comprises:
When the drag start time point is later than the drag end time point (a backward drag), determining that the adjustment view count is 1 for a third group of time points of the target video, wherein the third group of time points comprises the time points between the drag start time point and the drag end time point; when the drag start time point is earlier than the drag end time point (a forward drag), determining that the adjustment view count for the third group of time points is -1;
Alternatively,
When the drag start time point is later than the drag end time point, determining that the adjustment view count for the third group of time points is -1; when the drag start time point is earlier than the drag end time point, determining that the adjustment view count for the third group of time points is 1.
In a possible implementation, determining the view count for each time point of the target video in the corresponding viewing session according to the base view counts and the adjustment view counts comprises:
Adding the base view count and the adjustment view count for each time point of the target video in the corresponding viewing session to obtain the view count for that time point;
Alternatively,
Subtracting the adjustment view count from the base view count for each time point of the target video in the corresponding viewing session to obtain the view count for that time point.
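The base-plus-adjustment scheme of the preceding implementations can be sketched for a single viewing session as follows. The sketch assumes at most one drag per session and inclusive endpoints, and uses the first sign convention (backward drag +1, forward drag -1, counts added); these simplifications and all names are illustrative:

```python
from typing import List, Optional, Tuple

def session_view_counts(
    watch_start: int,
    watch_end: int,
    duration_s: int,
    drag: Optional[Tuple[int, int]] = None,  # (drag_start, drag_end)
) -> List[int]:
    """Per-time-point view count for one viewing session."""
    # Base view count: 1 inside [watch_start, watch_end], 0 elsewhere.
    counts = [0] * (duration_s + 1)
    for t in range(watch_start, watch_end + 1):
        counts[t] = 1
    if drag is not None:
        drag_start, drag_end = drag
        # Backward drag (start later than end): the dragged-over span is
        # watched again, +1. Forward drag: the span was skipped, -1.
        delta = 1 if drag_start > drag_end else -1
        lo, hi = sorted((drag_start, drag_end))
        for t in range(lo, hi + 1):
            counts[t] += delta
    return counts

# Watched seconds 0-9 of a 10-second video, dragging from 7 back to 4,
# so seconds 4-7 were viewed twice.
print(session_view_counts(0, 9, 10, drag=(7, 4)))
# [1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 0]
```

Summing these per-session lists over all groups of user viewing behavior data yields the per-time-point totals used in the rest of the method.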
In a possible implementation, determining the view count for each time point of the target video in the viewing session corresponding to each group of user viewing behavior data comprises:
For each group of user viewing behavior data, when the corresponding viewing session contains no drag behavior, obtaining the viewing start time point and the viewing end time point from that group of user viewing behavior data;
Determining that the view count is 1 for a first group of time points of the target video, wherein the first group of time points comprises the time points between the viewing start time point and the viewing end time point;
Determining that the view count is 0 for a second group of time points of the target video, wherein the second group of time points comprises the time points before the viewing start time point and the time points after the viewing end time point.
According to another aspect of the present disclosure, a device for determining a video cover is provided, comprising:
A first determining module, configured to determine the total view count corresponding to each time point of a target video;
A second determining module, configured to determine a key time point in the target video according to the total view counts corresponding to the time points of the target video;
A third determining module, configured to determine the cover of the target video according to the video frames near the key time point.
In a possible implementation, the second determining module comprises:
A first determining submodule, configured to determine the maximum points in the total view counts corresponding to the time points of the target video;
A second determining submodule, configured to determine the key time point in the target video according to the maximum points.
In a possible implementation, the third determining module comprises:
A first screening submodule, configured to screen, according to the image information of the video frames near the key time point, multiple candidate video frames from the video frames near the key time point for the user to choose from;
A third determining submodule, configured to determine the candidate video frame chosen by the user as the static cover of the target video.
In a possible implementation, the third determining module comprises:
A second screening submodule, configured to screen, according to the image information of the video frames near the key time point, multiple candidate video segments near the key time point for the user to choose from;
A fourth determining submodule, configured to determine the candidate video segment chosen by the user as the dynamic cover of the target video.
In a possible implementation, the third determining module comprises:
A third screening submodule, configured to screen a video frame from the video near the key time point according to the image information of the video frames near the key time point;
A fifth determining submodule, configured to determine the screened video frame as the static cover of the target video.
In a possible implementation, the third determining module comprises:
A fourth screening submodule, configured to screen a video clip near the key time point according to the image information of the video frames near the key time point;
A sixth determining submodule, configured to determine the screened video clip as the dynamic cover of the target video.
In a possible implementation, the image information of a video frame includes one or more of the frame's clarity, contrast, saturation, and sharpness.
In a possible implementation, the first determining module comprises:
An acquisition submodule, configured to obtain user viewing behavior data corresponding to the target video, wherein each group of user viewing behavior data corresponds to one user viewing session;
A seventh determining submodule, configured to determine the view count for each time point of the target video in the viewing session corresponding to each group of user viewing behavior data;
An eighth determining submodule, configured to determine the total view count corresponding to each time point of the target video according to the view counts.
In a possible implementation, the seventh determining submodule comprises:
An acquiring unit, configured to, for each group of user viewing behavior data, obtain the viewing start time point, the viewing end time point, the drag start time point, and the drag end time point from that group of user viewing behavior data when the corresponding viewing session contains a drag behavior;
A first determination unit, configured to determine, according to the viewing start time point and the viewing end time point, the base view count for each time point of the target video in the corresponding viewing session;
A second determination unit, configured to determine, according to the drag start time point and the drag end time point, the adjustment view count for each time point of the target video in the corresponding viewing session;
A third determination unit, configured to determine the view count for each time point of the target video in the corresponding viewing session according to the base view counts and the adjustment view counts for the time points of the target video in that session.
In a possible implementation, the first determination unit is configured to:
Determine that the base view count is 1 for a first group of time points of the target video, wherein the first group of time points comprises the time points between the viewing start time point and the viewing end time point;
Determine that the base view count is 0 for a second group of time points of the target video, wherein the second group of time points comprises the time points before the viewing start time point and the time points after the viewing end time point.
In a possible implementation, the second determination unit is configured to:
When the drag start time point is later than the drag end time point, determine that the adjustment view count is 1 for a third group of time points of the target video, wherein the third group of time points comprises the time points between the drag start time point and the drag end time point; when the drag start time point is earlier than the drag end time point, determine that the adjustment view count for the third group of time points is -1;
Alternatively,
When the drag start time point is later than the drag end time point, determine that the adjustment view count for the third group of time points is -1; when the drag start time point is earlier than the drag end time point, determine that the adjustment view count for the third group of time points is 1.
In a possible implementation, the third determination unit is configured to:
Add the base view count and the adjustment view count for each time point of the target video in the corresponding viewing session to obtain the view count for that time point;
Alternatively,
Subtract the adjustment view count from the base view count for each time point of the target video in the corresponding viewing session to obtain the view count for that time point.
In a possible implementation, the seventh determining submodule is configured to:
For each group of user viewing behavior data, when the corresponding viewing session contains no drag behavior, obtain the viewing start time point and the viewing end time point from that group of user viewing behavior data;
Determine that the view count is 1 for a first group of time points of the target video, wherein the first group of time points comprises the time points between the viewing start time point and the viewing end time point, and that the view count is 0 for a second group of time points, wherein the second group comprises the time points before the viewing start time point and the time points after the viewing end time point.
According to another aspect of the present disclosure, a device for determining a video cover is provided, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the method above.
According to another aspect of the present disclosure, a non-volatile computer-readable storage medium is provided, on which computer program instructions are stored, wherein the computer program instructions, when executed by a processor, implement the method above.
The method and device for determining a video cover of the aspects of the present disclosure determine the total view count corresponding to each time point of a target video, determine the key time point in the target video according to those counts, and determine the cover of the target video according to the video frames near the key time point. The cover of the target video can thus be determined flexibly based on the per-time-point total view counts, considerable human labor is saved, and the determined cover better reflects the content of the target video that attracts users.
Other features and aspects of the present disclosure will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute part of the specification, together with the specification illustrate exemplary embodiments, features, and aspects of the present disclosure, and serve to explain the principles of the present disclosure.
Fig. 1 shows a flowchart of a method for determining a video cover according to an embodiment of the present disclosure.
Fig. 2 shows an exemplary flowchart of step S12 of the method for determining a video cover according to an embodiment of the present disclosure.
Fig. 3 shows a schematic diagram of the total-view-count curve of the target video in the method for determining a video cover according to an embodiment of the present disclosure.
Fig. 4 shows a schematic diagram of the maximum points in the total view counts corresponding to the time points of the target video in the method according to an embodiment of the present disclosure.
Fig. 5 shows an exemplary flowchart of step S13 of the method according to an embodiment of the present disclosure.
Fig. 6 shows an exemplary flowchart of step S13 of the method according to an embodiment of the present disclosure.
Fig. 7 shows an exemplary flowchart of step S13 of the method according to an embodiment of the present disclosure.
Fig. 8 shows an exemplary flowchart of step S13 of the method according to an embodiment of the present disclosure.
Fig. 9 shows an exemplary flowchart of step S11 of the method according to an embodiment of the present disclosure.
Fig. 10 shows an exemplary flowchart of step S112 of the method according to an embodiment of the present disclosure.
Fig. 11 shows an exemplary flowchart of step S112 of the method according to an embodiment of the present disclosure.
Fig. 12 shows a block diagram of a device for determining a video cover according to an embodiment of the present disclosure.
Fig. 13 shows a block diagram of a device for determining a video cover according to an embodiment of the present disclosure.
Fig. 14 is a block diagram of a device 800 for determining a video cover according to an exemplary embodiment.
Fig. 15 is a block diagram of a device 1900 for determining a video cover according to an exemplary embodiment.
Detailed description of the embodiments
Various exemplary embodiments, features, and aspects of the present disclosure are described in detail below with reference to the accompanying drawings. Identical reference numerals in the drawings denote elements with identical or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless specifically noted.
The word "exemplary" here means "serving as an example, embodiment, or illustration". Any embodiment described here as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are given in the following detailed description to better illustrate the present disclosure. Those skilled in the art will understand that the present disclosure can equally be practiced without certain of these details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, so as to highlight the gist of the present disclosure.
Fig. 1 shows a flowchart of a method for determining a video cover according to an embodiment of the present disclosure. As shown in Fig. 1, the method includes steps S11 to S13.
In step S11, the total view count corresponding to each time point of the target video is determined.
Here, the target video can be any video whose cover needs to be determined.
In this embodiment, the total view count corresponding to a time point of the target video can, to some extent, reflect how attractive the video content at that time point is to users. For example, the larger the total view count corresponding to a given time point of the target video, the more attractive to users the corresponding video content may be; the smaller the total view count corresponding to a given time point, the less attractive the corresponding video content may be.
In step S12, the key time point in the target video is determined according to the total view counts corresponding to the time points of the target video.
In a possible implementation, the first K time points of the target video can be rejected, and the time point with the largest total view count among the remaining time points is determined as the key time point in the target video. For example, K equals 700. Users often sample a video briefly and then abandon it, so the time points in a short span at the beginning of the video tend to have large total view counts, even though this does not show that the corresponding video content is especially attractive to users. Rejecting the first K time points of the target video helps select a time point whose content is genuinely attractive to users as the key time point.
In another possible implementation, the time point with the largest total view count among all time points of the target video can be determined as the key time point in the target video.
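The two selection strategies above might be sketched as a single function: reject a K-second prefix (K = 0 recovers the plain argmax variant) and take the time point with the largest remaining total view count. The default skip length and all names are illustrative assumptions:

```python
from typing import List

def key_time_point(totals: List[int], skip_first_k: int = 700) -> int:
    """Argmax of the total view counts after discarding the first
    skip_first_k time points, whose counts are inflated by users who
    briefly sample the opening of a video."""
    tail = totals[skip_first_k:]
    if not tail:
        raise ValueError("video shorter than the skipped prefix")
    return skip_first_k + max(range(len(tail)), key=tail.__getitem__)

# A short example with K = 2: the early peaks at seconds 0-1 are
# ignored, so second 4 (count 8) wins.
print(key_time_point([9, 9, 1, 5, 8, 2], skip_first_k=2))  # 4
```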
In step S13, the cover of the target video is determined according to the video frames near the key time point.
In a possible implementation, the cover of the target video may include one or both of a static cover and a dynamic cover.
In this embodiment, the total view count corresponding to each time point of the target video is determined, the key time point in the target video is determined according to those counts, and the cover of the target video is determined according to the video frames near the key time point. The cover of the target video can thus be determined flexibly based on the per-time-point total view counts, considerable human labor is saved, and the determined cover better reflects the content of the target video that attracts users.
Fig. 2 shows an exemplary flowchart of step S12 of the method for determining a video cover according to an embodiment of the present disclosure. As shown in Fig. 2, step S12 may include steps S121 and S122.
In step S121, the maximum points in the total view counts corresponding to the time points of the target video are determined.
As an example of this embodiment, a total-view-count curve for the target video can be generated according to the total view counts corresponding to its time points. Fig. 3 shows a schematic diagram of such a curve: the abscissa is the time point (for example, in seconds) and the ordinate is the total view count. By taking the derivative of the function corresponding to the total-view-count curve, the maximum points of the curve can be determined. Fig. 4 shows a schematic diagram of the maximum points in the total view counts corresponding to the time points of the target video; again, the abscissa is the time point (for example, in seconds) and the ordinate is the total view count. There may be several maximum points, or just one.
In step S122, according to the maximum point, the material time point in target video is determined.
In one possible implementation, the material time that all maximum points can be determined as in target video Point.
It, can be by institute in the case where the number of maximum point is less than or equal to M in alternatively possible implementation There is maximum point to be determined as the material time point in target video;It, can will be total in the case where the number of maximum point is greater than M The maximum M maximum point of watched time is determined as the material time point in target video.Wherein, M is positive integer.
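As a rough sketch of steps S121 and S122, and assuming the per-second total watch counts are already available as a list, the local maxima and the top-M key time points might be found as follows (in the discrete case, comparing each point with its neighbors plays the role of taking the derivative; the function name and sample data are our own illustration, not part of the disclosure):

```python
def key_time_points(total_counts, m):
    """Return up to m time points (indices, in seconds) that are local
    maxima of the total-watch-count curve, preferring higher counts."""
    maxima = [
        t for t in range(1, len(total_counts) - 1)
        if total_counts[t - 1] < total_counts[t] >= total_counts[t + 1]
    ]
    # If there are more than M local maxima, keep the M with the
    # largest total watch counts, then report them in temporal order.
    maxima.sort(key=lambda t: total_counts[t], reverse=True)
    return sorted(maxima[:m])

counts = [0, 2, 5, 3, 3, 8, 9, 9, 4, 1]
print(key_time_points(counts, 2))  # -> [2, 6]
```

A real implementation would typically smooth the curve first so that noise in the counts does not produce spurious maxima.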
Fig. 5 shows an exemplary flowchart of step S13 of the method for determining a video cover according to an embodiment of the present disclosure. As shown in Fig. 5, step S13 may include step S131 and step S132.

In step S131, according to the image information of the video frames near the key time points, multiple candidate video frames are selected from the video frames near the key time points for the user to choose from.

In one possible implementation, the image information of a video frame includes one or more of the clarity, contrast, saturation, and sharpness of the video frame.

In one possible implementation, all video frames in the video segment from N seconds before a key time point to N seconds after it may be taken, where N is an integer. For each video frame in that segment, a first score corresponding to the frame may be determined from the total watch count at the time point of the frame, a second score from the clarity of the frame, a third score from its contrast, a fourth score from its saturation, and a fifth score from its sharpness. The total score of the frame is then determined from the first score and the weight corresponding to the total watch count, the second score and the weight corresponding to clarity, the third score and the weight corresponding to contrast, the fourth score and the weight corresponding to saturation, and the fifth score and the weight corresponding to sharpness. The L video frames with the highest total scores are determined as candidate video frames, where L is an integer greater than 1.
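A minimal sketch of the weighted scoring above, assuming the five per-frame scores have already been computed and normalized to a common range; the weights, names, and sample values here are illustrative assumptions, not values given in the disclosure:

```python
def frame_total_score(scores, weights):
    """Weighted sum of the five per-frame scores, in the order:
    watch count, clarity, contrast, saturation, sharpness."""
    return sum(s * w for s, w in zip(scores, weights))

def candidate_frames(frames, weights, l):
    """frames: list of (frame_id, [score1..score5]) pairs.
    Return the L frame ids with the highest total score."""
    ranked = sorted(frames, key=lambda f: frame_total_score(f[1], weights),
                    reverse=True)
    return [frame_id for frame_id, _ in ranked[:l]]

weights = [0.4, 0.2, 0.15, 0.15, 0.1]  # illustrative weights summing to 1
frames = [("f1", [0.9, 0.5, 0.5, 0.5, 0.5]),
          ("f2", [0.2, 0.9, 0.9, 0.9, 0.9]),
          ("f3", [0.8, 0.8, 0.8, 0.8, 0.8])]
print(candidate_frames(frames, weights, 2))  # -> ['f3', 'f1']
```

How each individual score is derived from an image attribute (e.g. clarity via a variance-of-Laplacian measure) is left open by the disclosure.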
In step S132, the candidate video frame selected by the user is determined as the static cover of the target video.

In this embodiment, multiple candidate video frames are provided for the user to choose from, so the user can determine the cover of the target video without watching the entire video. This greatly saves human resources and shortens the time needed to determine the video cover.
Fig. 6 shows an exemplary flowchart of step S13 of the method for determining a video cover according to an embodiment of the present disclosure. As shown in Fig. 6, step S13 may include step S133 and step S134.

In step S133, according to the image information of the video frames near the key time points, multiple candidate video segments are selected near the key time points for the user to choose from.

In one possible implementation, the method described above may be used to determine the score of each video frame in the video segment from N seconds before a key time point to N seconds after it. From the frame scores, the score of each sub-segment within that range can be determined; for example, the average of the total scores of all video frames in a sub-segment may be determined as the score of that sub-segment. The P sub-segments with the highest scores may be determined as candidate video segments, where P is an integer greater than 1.

For example, the duration of a candidate video segment may be 2 to 3 seconds, but is not limited thereto.
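Under the same assumptions, the sub-segment scoring (each window scored by the mean of its frames' total scores, with the P best windows kept) might be sketched as follows; the window length and frame scores are made up for illustration:

```python
def best_segments(frame_scores, window, p):
    """frame_scores: per-frame total scores in temporal order.
    Score each window of `window` consecutive frames by the mean of its
    frame scores and return the start indices of the P best windows."""
    seg_scores = [
        (start, sum(frame_scores[start:start + window]) / window)
        for start in range(len(frame_scores) - window + 1)
    ]
    seg_scores.sort(key=lambda s: s[1], reverse=True)
    return [start for start, _ in seg_scores[:p]]

scores = [0.2, 0.9, 0.8, 0.1, 0.7, 0.7]
print(best_segments(scores, 2, 2))  # -> [1, 4]
```

At a real frame rate, a 2-to-3-second segment would span several dozen frames; the window length here is kept tiny only to make the example readable.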
In step S134, the candidate video segment selected by the user is determined as the dynamic cover of the target video.

In this embodiment, multiple candidate video segments are provided for the user to choose from, so the user can determine the cover of the target video without watching the entire video. This greatly saves human resources and shortens the time needed to determine the video cover.
Fig. 7 shows an exemplary flowchart of step S13 of the method for determining a video cover according to an embodiment of the present disclosure. As shown in Fig. 7, step S13 may include step S135 and step S136.

In step S135, a video frame is selected from the video near the key time points according to the image information of the video frames near the key time points.

In step S136, the selected video frame is determined as the static cover of the target video.

In one possible implementation, the method described above may be used to determine the score of each video frame in the video segment from N seconds before a key time point to N seconds after it; the video frame with the highest score may then be selected and determined as the static cover of the target video.

In this embodiment, a video frame is selected from the video near the key time points according to the image information of the nearby video frames, and the selected frame is determined as the static cover of the target video. The cover of the target video can thus be determined automatically, without manual selection by the user, which greatly saves human resources and shortens the time needed to determine the video cover.
Fig. 8 shows an exemplary flowchart of step S13 of the method for determining a video cover according to an embodiment of the present disclosure. As shown in Fig. 8, step S13 may include step S137 and step S138.

In step S137, a video clip is selected near the key time points according to the image information of the video frames near the key time points.

In step S138, the selected video clip is determined as the dynamic cover of the target video.

In one possible implementation, the method described above may be used to determine the score of each sub-segment in the video segment from N seconds before a key time point to N seconds after it; the sub-segment with the highest score may then be selected and determined as the dynamic cover of the target video.

In this embodiment, a video clip is selected near the key time points according to the image information of the video frames near the key time points, and the selected clip is determined as the dynamic cover of the target video. The cover of the target video can thus be determined automatically, without manual selection by the user, which greatly saves human resources and shortens the time needed to determine the video cover.
Fig. 9 shows an exemplary flowchart of step S11 of the method for determining a video cover according to an embodiment of the present disclosure. As shown in Fig. 9, step S11 may include steps S111 to S113.

In step S111, user viewing behavior data corresponding to the target video is obtained, where each group of user viewing behavior data corresponds to one user viewing behavior.

In one possible implementation, the user viewing behavior data may include a viewing start time point and a viewing end time point for the target video. For example, if the duration of the target video is 00:45:23, a certain group of user viewing behavior data may have a viewing start time point of 00:00:30 and a viewing end time point of 00:43:44 for the target video.

In another possible implementation, the user viewing behavior data may include a viewing start time point, a viewing end time point, drag start time points, and drag end time points for the target video.
In this implementation, the user viewing behavior corresponding to a group of user viewing behavior data may include no drag behavior, or it may include one or more drag behaviors. If the user viewing behavior corresponding to a certain group of data includes multiple drag behaviors, that group of data may include multiple drag start time points and multiple drag end time points.

As an example of this implementation, a drag start time point may be later than the corresponding drag end time point; that is, the user may drag the progress bar back toward the beginning to rewatch a certain video clip. For example, if the duration of the target video is 00:45:23, a certain group of user viewing behavior data may have, for the target video, a viewing start time point of 00:03:30, a viewing end time point of 00:25:44, a drag start time point of 00:15:32, and a drag end time point of 00:10:12.

As another example of this implementation, a drag start time point may be earlier than the corresponding drag end time point; that is, the user may drag the progress bar forward to skip a certain video clip. For example, if the duration of the target video is 00:45:23, a certain group of user viewing behavior data may have, for the target video, a viewing start time point of 00:03:30, a viewing end time point of 00:25:44, a drag start time point of 00:10:12, and a drag end time point of 00:15:32.
In one possible implementation, obtaining the user viewing behavior data corresponding to the target video may include: obtaining the user viewing behavior data corresponding to the target video within a specified time period. For example, the specified time period may be the most recent month or the most recent three months.

In another possible implementation, obtaining the user viewing behavior data corresponding to the target video may include: obtaining all user viewing behavior data corresponding to the target video.
In step S112, the watch count at each time point of the target video is determined for the user viewing behavior corresponding to each group of user viewing behavior data.

In this embodiment, within the user viewing behavior corresponding to one group of data, the watch counts at different time points of the target video may differ.

In one possible implementation, each second of the target video may be treated as one time point when determining the watch count at each time point of the target video in the user viewing behavior corresponding to each group of data.

In step S113, the total watch count corresponding to each time point of the target video is determined from those watch counts.

In this embodiment, after the watch count at each time point of the target video has been determined for the user viewing behavior corresponding to each obtained group of user viewing behavior data, the watch counts at the same time point of the target video across the user viewing behaviors of all groups can be added together to obtain the total watch count corresponding to each time point of the target video. For example, adding the watch counts at the 500th second of the target video across the user viewing behaviors corresponding to all groups of data yields the total watch count corresponding to the 500th second of the target video.
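The aggregation in step S113 is a per-time-point sum over all groups. Assuming each group's watch counts are stored as a per-second list of equal length, a minimal sketch:

```python
def total_watch_counts(per_group_counts):
    """per_group_counts: one list of per-second watch counts per group of
    user viewing behavior data; all lists cover the same video duration.
    Returns the per-second total watch counts for the target video."""
    return [sum(counts) for counts in zip(*per_group_counts)]

group_a = [0, 1, 1, 1, 0]  # per-second counts from one viewing behavior
group_b = [1, 1, 2, 0, 0]  # another group, with a rewatched second
print(total_watch_counts([group_a, group_b]))  # -> [1, 2, 3, 1, 0]
```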
Fig. 10 shows an exemplary flowchart of step S112 of the method for determining a video cover according to an embodiment of the present disclosure. As shown in Fig. 10, step S112 may include steps S1121 to S1124.

In step S1121, for each group of user viewing behavior data, if drag behavior is present in the user viewing behavior corresponding to that group, the viewing start time point, viewing end time point, drag start time points, and drag end time points in that group of data are obtained.

Here, drag behavior may refer to the user operating the player progress bar, or using a gesture or shortcut key, to skip or replay video content while watching the video.

In step S1122, the basic watch count at each time point of the target video in the user viewing behavior corresponding to that group of data is determined from the viewing start time point and the viewing end time point.

In one possible implementation, determining the basic watch counts from the viewing start time point and the viewing end time point may include: determining that the basic watch count for the first group of time points of the target video is 1, where the first group of time points includes each time point between the viewing start time point and the viewing end time point; and determining that the basic watch count for the second group of time points is 0, where the second group of time points includes the time points before the viewing start time point and the time points after the viewing end time point.

For example, if the duration of the target video is 00:45:23, and a certain group of user viewing behavior data has a viewing start time point of 00:00:30 and a viewing end time point of 00:43:44 for the target video, then the basic watch count for the first group of time points may be determined to be 1 and the basic watch count for the second group of time points to be 0, where the first group of time points includes the time points from 00:00:30 to 00:43:44, and the second group of time points includes the time points from 00:00:00 to 00:00:29 and from 00:43:45 to 00:45:23.
In step S1123, the adjustment watch count at each time point of the target video in the user viewing behavior corresponding to that group of data is determined from the drag start time points and the drag end time points.

In one possible implementation, determining the adjustment watch counts from the drag start time points and the drag end time points may include: if a drag start time point is later than the corresponding drag end time point, determining that the adjustment watch count for the third group of time points of the target video is 1, where the third group of time points includes each time point between the drag start time point and the drag end time point; and if a drag start time point is earlier than the corresponding drag end time point, determining that the adjustment watch count for the third group of time points is -1.
For example, if the duration of the target video is 00:45:23, and a certain group of user viewing behavior data has, for the target video, a viewing start time point of 00:03:30, a viewing end time point of 00:25:44, a drag start time point of 00:15:32, and a drag end time point of 00:10:12, it can be determined that the drag start time point is later than the drag end time point. In this case, the adjustment watch count for the third group of time points may be determined to be 1, where the third group of time points includes the time points from 00:10:12 to 00:15:32.

As another example, if the same group of data instead has a drag start time point of 00:10:12 and a drag end time point of 00:15:32, it can be determined that the drag start time point is earlier than the drag end time point. In this case, the adjustment watch count for the third group of time points may be determined to be -1, where the third group of time points again includes the time points from 00:10:12 to 00:15:32.
In another possible implementation, determining the adjustment watch counts from the drag start time points and the drag end time points may include: if a drag start time point is later than the corresponding drag end time point, determining that the adjustment watch count for the third group of time points of the target video is -1; and if a drag start time point is earlier than the corresponding drag end time point, determining that the adjustment watch count for the third group of time points is 1.

For example, if the duration of the target video is 00:45:23, and a certain group of user viewing behavior data has, for the target video, a viewing start time point of 00:03:30, a viewing end time point of 00:25:44, a drag start time point of 00:15:32, and a drag end time point of 00:10:12, it can be determined that the drag start time point is later than the drag end time point. In this case, under this implementation, the adjustment watch count for the third group of time points may be determined to be -1, where the third group of time points includes the time points from 00:10:12 to 00:15:32.

As another example, if the same group of data instead has a drag start time point of 00:10:12 and a drag end time point of 00:15:32, it can be determined that the drag start time point is earlier than the drag end time point. In this case, under this implementation, the adjustment watch count for the third group of time points may be determined to be 1, where the third group of time points again includes the time points from 00:10:12 to 00:15:32.
In step S1124, the watch count at each time point of the target video in the user viewing behavior corresponding to that group of data is determined from the basic watch count and the adjustment watch count at each time point of the target video in that user viewing behavior.

In one possible implementation, this may include: adding, for each time point of the target video, the basic watch count and the adjustment watch count in the user viewing behavior corresponding to that group of data, to obtain the watch count at each time point of the target video in that user viewing behavior.

For example, if the duration of the target video is 00:45:23, and a certain group of user viewing behavior data has, for the target video, a viewing start time point of 00:03:30, a viewing end time point of 00:25:44, a drag start time point of 00:15:32, and a drag end time point of 00:10:12, it can be determined that the drag start time point is later than the drag end time point, so the adjustment watch count for the third group of time points (00:10:12 to 00:15:32) is 1. Adding the basic watch count and the adjustment watch count at each time point of the target video yields the watch counts in the user viewing behavior corresponding to this group of data: 0 for the time points from 00:00:00 to 00:03:29, 1 for the time points from 00:03:30 to 00:10:11, 2 for the time points from 00:10:12 to 00:15:32, 1 for the time points from 00:15:33 to 00:25:44, and 0 for the time points from 00:25:45 to 00:45:23.
As another example, suppose the duration of the target video is 00:45:23, and a certain group of user viewing behavior data has a viewing start time point of 00:03:30 and a viewing end time point of 00:25:44 for the target video. Its first drag start time point is 00:10:12 and its first drag end time point is 00:15:32; the first drag start time point is therefore earlier than the first drag end time point, so the adjustment watch count for the first third group of time points (00:10:12 to 00:15:32) is -1. Its second drag start time point is 00:19:18 and its second drag end time point is 00:16:10; the second drag start time point is therefore later than the second drag end time point, so the adjustment watch count for the second third group of time points (00:16:10 to 00:19:18) is 1. Adding the basic watch count and the adjustment watch counts at each time point of the target video yields the watch counts in the user viewing behavior corresponding to this group of data: 0 for the time points from 00:00:00 to 00:03:29, 1 for the time points from 00:03:30 to 00:10:11, 0 for the time points from 00:10:12 to 00:15:32, 1 for the time points from 00:15:33 to 00:16:09, 2 for the time points from 00:16:10 to 00:19:18, 1 for the time points from 00:19:19 to 00:25:44, and 0 for the time points from 00:25:45 to 00:45:23.
In another possible implementation, determining the watch counts may instead include: subtracting, for each time point of the target video, the adjustment watch count from the basic watch count in the user viewing behavior corresponding to that group of data, to obtain the watch count at each time point of the target video in that user viewing behavior.
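Steps S1121 to S1124 can be sketched as follows for a single group of data, using the first sign convention (a drag whose start time point is later than its end time point, i.e. a rewind, adds 1 to the time points it spans; a forward skip subtracts 1). Time points are seconds, and the helper is a hypothetical illustration rather than code from the disclosure:

```python
def group_watch_counts(duration, view_start, view_end, drags):
    """duration: video length in seconds; view_start/view_end: viewing
    start and end time points; drags: list of (drag_start, drag_end)
    pairs. Returns the per-second watch counts for this group."""
    # Basic watch count: 1 between viewing start and end, 0 elsewhere.
    counts = [1 if view_start <= t <= view_end else 0
              for t in range(duration)]
    for drag_start, drag_end in drags:
        lo, hi = sorted((drag_start, drag_end))
        # Rewind (start later than end): the span is rewatched, +1.
        # Forward skip (start earlier than end): never watched, -1.
        delta = 1 if drag_start > drag_end else -1
        for t in range(lo, hi + 1):
            counts[t] += delta
    return counts

# Viewing from second 3 to second 8 of a 10-second video,
# with one rewind from second 7 back to second 5.
print(group_watch_counts(10, 3, 8, [(7, 5)]))
# -> [0, 0, 0, 1, 1, 2, 2, 2, 1, 0]
```

With an empty drag list this reduces to the no-drag case of steps S1125 to S1127, where the basic watch counts are the final watch counts.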
Fig. 11 shows an exemplary flowchart of step S112 of the method for determining a video cover according to an embodiment of the present disclosure. As shown in Fig. 11, step S112 may include steps S1125 to S1127.

In step S1125, for each group of user viewing behavior data, if no drag behavior is present in the user viewing behavior corresponding to that group, the viewing start time point and the viewing end time point in that group of data are obtained.

In step S1126, the watch count for the first group of time points of the target video in the user viewing behavior corresponding to that group of data is determined to be 1, where the first group of time points includes each time point between the viewing start time point and the viewing end time point.

In step S1127, the watch count for the second group of time points of the target video in the user viewing behavior corresponding to that group of data is determined to be 0, where the second group of time points includes the time points before the viewing start time point and the time points after the viewing end time point.

For example, if the duration of the target video is 00:45:23, and a certain group of user viewing behavior data has a viewing start time point of 00:00:30 and a viewing end time point of 00:43:44 for the target video, then the watch count for the first group of time points may be determined to be 1 and the watch count for the second group of time points to be 0, where the first group of time points includes the time points from 00:00:30 to 00:43:44, and the second group of time points includes the time points from 00:00:00 to 00:00:29 and from 00:43:45 to 00:45:23.
Fig. 12 shows a block diagram of the apparatus for determining a video cover according to an embodiment of the present disclosure. As shown in Fig. 12, the apparatus includes: a first determination module 21, configured to determine the total watch count corresponding to each time point of the target video; a second determination module 22, configured to determine the key time points in the target video according to the total watch count corresponding to each time point of the target video; and a third determination module 23, configured to determine the cover of the target video according to the video frames near the key time points.

Fig. 13 shows a block diagram of the apparatus for determining a video cover according to an embodiment of the present disclosure. As shown in Fig. 13:
In one possible implementation, second determining module 22 includes: the first determining submodule 221, is used for Determine the maximum point in the corresponding total watched time of the various time points of the target video;Second determines submodule 222, uses According to the maximum point, the material time point in the target video is determined.
In one possible implementation, the third determining module 23 includes: the first screening submodule 231, is used for According to the image information of the video frame near the material time point, filtered out from the video frame near the material time point Multiple candidate video frames are selected for user;Third determines submodule 232, and the candidate video frame for selecting the user determines For the static cover of the target video.
In one possible implementation, the third determining module 23 includes: the second screening submodule 233, is used for According to the image information of the video frame near the material time point, multiple candidate views are filtered out near the material time point Frequency segment is selected for user;4th determines submodule 234, and the candidate video segment for selecting the user is determined as described The dynamic cover of target video.
In one possible implementation, the third determining module 23 includes: a third screening submodule 235, configured to screen a video frame from the video near the key time point according to image information of the video frames near the key time point; and a fifth determining submodule 236, configured to determine the screened video frame as a static cover of the target video.
In one possible implementation, the third determining module 23 includes: a fourth screening submodule 237, configured to screen a video clip near the key time point according to image information of the video frames near the key time point; and a sixth determining submodule 238, configured to determine the screened video clip as a dynamic cover of the target video.
In one possible implementation, the image information of a video frame includes one or more of the clarity, contrast, saturation, and sharpness of the video frame.
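The disclosure does not specify how clarity, contrast, saturation, or sharpness are to be computed, so the following is only one plausible scoring sketch: contrast as the standard deviation of luminance, clarity/sharpness as the mean squared response of a 4-neighbour Laplacian, and saturation as the HSV-style (max - min) / max averaged over pixels. Every formula and name here is an assumption.

```python
def frame_quality(rgb):
    """Score one RGB frame (a list of rows of (r, g, b) tuples with values
    in [0, 1]) on three of the properties above. The formulas are
    illustrative choices, not definitions taken from the disclosure."""
    h, w = len(rgb), len(rgb[0])
    n = h * w
    gray = [[sum(px) / 3.0 for px in row] for row in rgb]
    # contrast: standard deviation of the luminance values
    mean = sum(map(sum, gray)) / n
    contrast = (sum((g - mean) ** 2 for row in gray for g in row) / n) ** 0.5
    # clarity/sharpness: mean squared response of a 4-neighbour Laplacian
    lap = [4 * gray[y][x] - gray[y - 1][x] - gray[y + 1][x]
           - gray[y][x - 1] - gray[y][x + 1]
           for y in range(1, h - 1) for x in range(1, w - 1)]
    clarity = sum(v * v for v in lap) / max(len(lap), 1)
    # saturation: HSV-style (max - min) / max, averaged over pixels
    saturation = sum((max(px) - min(px)) / max(max(px), 1e-8)
                     for row in rgb for px in row) / n
    return {"clarity": clarity, "contrast": contrast, "saturation": saturation}
```

Candidate frames near a key time point could then be ranked by any weighted combination of these scores; a uniform gray frame scores zero on all three.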
In one possible implementation, the first determining module 21 includes: an obtaining submodule 211, configured to obtain user watching behavior data corresponding to the target video, where each group of user watching behavior data corresponds to one user watching behavior; a seventh determining submodule 212, configured to determine the watch count for each time point of the target video in the user watching behavior corresponding to each group of user watching behavior data; and an eighth determining submodule 213, configured to determine, according to those watch counts, the total watch count corresponding to each time point of the target video.
In one possible implementation, the seventh determining submodule 212 includes: an obtaining unit, configured to obtain, for each group of user watching behavior data whose corresponding user watching behavior contains a dragging behavior, a watch start time point, a watch end time point, a drag start time point, and a drag end time point from the group of user watching behavior data; a first determination unit, configured to determine, according to the watch start time point and the watch end time point, a basic watch count for each time point of the target video in the user watching behavior corresponding to the group of user watching behavior data; a second determination unit, configured to determine, according to the drag start time point and the drag end time point, an adjustment watch count for each time point of the target video in the user watching behavior corresponding to the group of user watching behavior data; and a third determination unit, configured to determine, according to the basic watch counts and the adjustment watch counts, the watch count for each time point of the target video in the user watching behavior corresponding to the group of user watching behavior data.
In one possible implementation, the first determination unit is configured to: determine that the basic watch count for a first group of time points of the target video in the user watching behavior corresponding to the group of user watching behavior data is 1, where the first group of time points includes the time points between the watch start time point and the watch end time point; and determine that the basic watch count for a second group of time points of the target video is 0, where the second group of time points includes the time points before the watch start time point and the time points after the watch end time point.
In one possible implementation, the second determination unit is configured to: in the case where the drag start time point is later than the drag end time point, determine that the adjustment watch count for a third group of time points of the target video in the user watching behavior corresponding to the group of user watching behavior data is 1, where the third group of time points includes the time points between the drag start time point and the drag end time point; and in the case where the drag start time point is earlier than the drag end time point, determine that the adjustment watch count for the third group of time points is -1. Alternatively, the second determination unit is configured to: in the case where the drag start time point is later than the drag end time point, determine that the adjustment watch count for the third group of time points is -1; and in the case where the drag start time point is earlier than the drag end time point, determine that the adjustment watch count for the third group of time points is 1.
In one possible implementation, the third determination unit is configured to: add the basic watch count for each time point of the target video in the user watching behavior corresponding to the group of user watching behavior data to the corresponding adjustment watch count, to obtain the watch count for each time point of the target video in that user watching behavior; or, subtract the corresponding adjustment watch count from the basic watch count for each time point, to obtain the watch count for each time point of the target video in that user watching behavior.
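The basic-plus-adjustment scheme above can be sketched as follows for a single group of user watching behavior data, assuming integer-second time points. A backward drag (drag start later than drag end) contributes +1 to the spanned time points and a forward drag contributes -1, following the first sign convention described above; all names and the per-second granularity are illustrative.

```python
def session_watch_counts(duration, watch_start, watch_end, drags=()):
    """Per-second watch counts for one user watching behavior.

    `drags` is a sequence of (drag_start, drag_end) pairs. Watching
    covers [watch_start, watch_end); a backward drag re-watches the
    spanned seconds (+1) and a forward drag skips them (-1)."""
    counts = [0] * duration
    for t in range(watch_start, min(watch_end, duration)):
        counts[t] = 1  # basic watch count
    for drag_start, drag_end in drags:
        lo, hi = sorted((drag_start, drag_end))
        delta = 1 if drag_start > drag_end else -1  # rewind vs. skip
        for t in range(lo, min(hi, duration)):
            counts[t] += delta  # adjustment watch count
    return counts

# a 10-second video watched end to end, with one rewind from 6s back to 2s
print(session_watch_counts(10, 0, 10, [(6, 2)]))  # -> [1, 1, 2, 2, 2, 2, 1, 1, 1, 1]
```

A forward drag over the same interval would instead zero out the skipped seconds, since those time points were never actually viewed.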
In one possible implementation, the seventh determining submodule 212 is configured to: for each group of user watching behavior data whose corresponding user watching behavior contains no dragging behavior, obtain the watch start time point and the watch end time point from the group of user watching behavior data; determine that the watch count for a first group of time points of the target video in the user watching behavior corresponding to the group of user watching behavior data is 1, where the first group of time points includes the time points between the watch start time point and the watch end time point; and determine that the watch count for a second group of time points of the target video is 0, where the second group of time points includes the time points before the watch start time point and the time points after the watch end time point.
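Combining sessions with and without dragging behavior, the total watch count per time point (as determined by the eighth determining submodule 213) might be accumulated as follows. Each session is represented as (watch start, watch end, list of drags), with an empty drag list for sessions without dragging; the representation and the per-second granularity are assumptions for illustration.

```python
def total_watch_counts(duration, sessions):
    """Accumulate per-second watch counts over many user watching
    behaviors into the total watch count per time point. Each session
    is (watch_start, watch_end, drags); `drags` is empty for sessions
    without dragging behavior."""
    totals = [0] * duration
    for watch_start, watch_end, drags in sessions:
        for t in range(watch_start, min(watch_end, duration)):
            totals[t] += 1  # basic watch count
        for drag_start, drag_end in drags:
            lo, hi = sorted((drag_start, drag_end))
            delta = 1 if drag_start > drag_end else -1
            for t in range(lo, min(hi, duration)):
                totals[t] += delta  # adjustment watch count
    return totals

sessions = [(0, 5, []),        # watched 0s-5s, no drag
            (2, 8, [(7, 4)])]  # watched 2s-8s, rewound from 7s to 4s
print(total_watch_counts(10, sessions))  # -> [1, 1, 2, 2, 3, 2, 2, 1, 0, 0]
```

The maxima of the resulting list are then the candidates for key time points.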
In this embodiment, the total watch count corresponding to each time point of the target video is determined; key time points in the target video are determined according to those total watch counts; and the cover of the target video is determined according to the video frames near the key time points. The cover of the target video can thus be determined flexibly based on the total watch counts for its time points, which greatly saves human resources, and the resulting cover better reflects the content in the target video that attracts users.
Figure 14 is a block diagram of an apparatus 800 for determining a video cover according to an exemplary embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to Figure 14, the apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operation of the apparatus 800, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 802 may include one or more processors 820 to execute instructions, so as to perform all or part of the steps of the methods described above. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components; for example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation of the apparatus 800. Examples of such data include instructions for any application or method operated on the apparatus 800, contact data, phonebook data, messages, pictures, video, and so on. The memory 804 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The power supply component 806 provides power to the various components of the apparatus 800. The power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen providing an output interface between the apparatus 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the apparatus 800 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operating mode, such as a call mode, a recording mode, or a voice recognition mode. A received audio signal may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 814 includes one or more sensors to provide status assessments of various aspects of the apparatus 800. For example, the sensor component 814 may detect the open/closed state of the apparatus 800 and the relative positioning of components, such as the display and keypad of the apparatus 800; the sensor component 814 may also detect a change in position of the apparatus 800 or a component of the apparatus 800, the presence or absence of user contact with the apparatus 800, the orientation or acceleration/deceleration of the apparatus 800, and a change in temperature of the apparatus 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the apparatus 800 and other devices. The apparatus 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
In an exemplary embodiment, there is also provided a non-volatile computer-readable storage medium, such as the memory 804 including computer program instructions, which are executable by the processor 820 of the apparatus 800 to perform the above methods.
Figure 15 is a block diagram of an apparatus 1900 for determining a video cover according to an exemplary embodiment. For example, the apparatus 1900 may be provided as a server. Referring to Figure 15, the apparatus 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as applications. An application stored in the memory 1932 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 1922 is configured to execute the instructions to perform the above methods.
The apparatus 1900 may also include a power supply component 1926 configured to perform power management of the apparatus 1900, a wired or wireless network interface 1950 configured to connect the apparatus 1900 to a network, and an input/output (I/O) interface 1958. The apparatus 1900 can operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, there is also provided a non-volatile computer-readable storage medium, such as the memory 1932 including computer program instructions, which are executable by the processing component 1922 of the apparatus 1900 to perform the above methods.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions thereon for causing a processor to carry out aspects of the present disclosure.
The computer-readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer-readable storage medium includes: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse passing through a fiber-optic cable), or an electrical signal transmitted through a wire.
Computer-readable program instructions described herein can be downloaded to respective computing/processing devices from a computer-readable storage medium, or to an external computer or external storage device via a network, for example the Internet, a local area network, a wide area network, and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
Computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In scenarios involving a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may be personalized by utilizing state information of the computer-readable program instructions, and the electronic circuit may execute the computer-readable program instructions, in order to implement aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium having the instructions stored therein comprises an article of manufacture including instructions which implement aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or another device to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other device to produce a computer-implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The embodiments of the present disclosure have been described above. The foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or technical improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (28)

1. A method for determining a video cover, comprising:
determining a total watch count corresponding to each time point of a target video;
determining a key time point in the target video according to the total watch count corresponding to each time point of the target video; and
determining a cover of the target video according to video frames near the key time point.
2. The method according to claim 1, wherein determining the key time point in the target video according to the total watch count corresponding to each time point of the target video comprises:
determining maximum points in the total watch counts corresponding to the time points of the target video; and
determining the key time point in the target video according to the maximum points.
3. The method according to claim 1, wherein determining the cover of the target video according to the video frames near the key time point comprises:
screening out, according to image information of the video frames near the key time point, multiple candidate video frames from the video frames near the key time point for a user to select from; and
determining the candidate video frame selected by the user as a static cover of the target video.
4. The method according to claim 1, wherein determining the cover of the target video according to the video frames near the key time point comprises:
screening out, according to image information of the video frames near the key time point, multiple candidate video clips near the key time point for a user to select from; and
determining the candidate video clip selected by the user as a dynamic cover of the target video.
5. The method according to claim 1, wherein determining the cover of the target video according to the video frames near the key time point comprises:
screening a video frame from the video near the key time point according to image information of the video frames near the key time point; and
determining the screened video frame as a static cover of the target video.
6. The method according to claim 1, wherein determining the cover of the target video according to the video frames near the key time point comprises:
screening a video clip near the key time point according to image information of the video frames near the key time point; and
determining the screened video clip as a dynamic cover of the target video.
7. The method according to any one of claims 3 to 6, wherein the image information of a video frame includes one or more of the clarity, contrast, saturation, and sharpness of the video frame.
8. The method according to claim 1, wherein determining the total watch count corresponding to each time point of the target video comprises:
obtaining user watching behavior data corresponding to the target video, wherein each group of user watching behavior data corresponds to one user watching behavior;
determining a watch count for each time point of the target video in the user watching behavior corresponding to each group of user watching behavior data; and
determining, according to the watch counts, the total watch count corresponding to each time point of the target video.
9. The method according to claim 8, wherein determining the watch count for each time point of the target video in the user watching behavior corresponding to each group of user watching behavior data comprises:
for each group of user watching behavior data, in the case where a dragging behavior exists in the user watching behavior corresponding to the group of user watching behavior data, obtaining a watch start time point, a watch end time point, a drag start time point, and a drag end time point from the group of user watching behavior data;
determining, according to the watch start time point and the watch end time point, a basic watch count for each time point of the target video in the user watching behavior corresponding to the group of user watching behavior data;
determining, according to the drag start time point and the drag end time point, an adjustment watch count for each time point of the target video in the user watching behavior corresponding to the group of user watching behavior data; and
determining, according to the basic watch counts and the adjustment watch counts, the watch count for each time point of the target video in the user watching behavior corresponding to the group of user watching behavior data.
10. The method according to claim 9, wherein determining the basic watch count for each time point of the target video according to the watch start time point and the watch end time point comprises:
determining that the basic watch count for a first group of time points of the target video in the user watching behavior corresponding to the group of user watching behavior data is 1, wherein the first group of time points includes the time points between the watch start time point and the watch end time point; and
determining that the basic watch count for a second group of time points of the target video is 0, wherein the second group of time points includes the time points before the watch start time point and the time points after the watch end time point.
11. The method according to claim 9, wherein determining the adjustment watch count for each time point of the target video according to the drag start time point and the drag end time point comprises:
in the case where the drag start time point is later than the drag end time point, determining that the adjustment watch count for a third group of time points of the target video in the user watching behavior corresponding to the group of user watching behavior data is 1, wherein the third group of time points includes the time points between the drag start time point and the drag end time point; and in the case where the drag start time point is earlier than the drag end time point, determining that the adjustment watch count for the third group of time points is -1;
or,
in the case where the drag start time point is later than the drag end time point, determining that the adjustment watch count for the third group of time points is -1; and in the case where the drag start time point is earlier than the drag end time point, determining that the adjustment watch count for the third group of time points is 1.
12. The method according to any one of claims 9 to 11, characterized in that determining, according to the basic watched time of each time point of the target video and the adjustment watched time of each time point of the target video in the user watching behavior corresponding to this group of user watching behavior data, the watched time of each time point of the target video in the user watching behavior corresponding to this group of user watching behavior data comprises:
Adding the basic watched time of each time point of the target video in the user watching behavior corresponding to this group of user watching behavior data to the adjustment watched time of the same time point, to obtain the watched time of each time point of the target video in the user watching behavior corresponding to this group of user watching behavior data;
Or,
Subtracting the adjustment watched time of each time point of the target video in the user watching behavior corresponding to this group of user watching behavior data from the basic watched time of the same time point, to obtain the watched time of each time point of the target video in the user watching behavior corresponding to this group of user watching behavior data.
13. The method according to claim 8, characterized in that determining the watched time of each time point of the target video in the user watching behavior corresponding to each group of user watching behavior data comprises:
For each group of user watching behavior data, in a case that no dragging behavior exists in the user watching behavior corresponding to this group of user watching behavior data, obtaining the viewing start time point and the viewing end time point in this group of user watching behavior data;
Determining that the watched time of a first group of time points of the target video in the user watching behavior corresponding to this group of user watching behavior data is 1, wherein the first group of time points comprises the time points between the viewing start time point and the viewing end time point;
Determining that the watched time of a second group of time points of the target video in the user watching behavior corresponding to this group of user watching behavior data is 0, wherein the second group of time points comprises the time points before the viewing start time point and the time points after the viewing end time point.
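Claims 9 to 13 together describe one per-session computation: a basic watched time of 1 inside the watched interval and 0 outside, plus ±1 drag adjustments, summed per time point. A minimal Python sketch (the function name and the per-integer time grid are assumptions, not from the patent):

```python
def watch_counts(duration, watch_start, watch_end, drags=()):
    """Watched time per time point for one viewing session.

    Basic watched time is 1 inside [watch_start, watch_end] and 0
    outside (claims 10 and 13); each drag adds +1 (backward drag) or
    -1 (forward drag) over the dragged-over span (claim 11); the sum
    is the session's watched time per time point (claim 12).
    """
    counts = [1 if watch_start <= t <= watch_end else 0
              for t in range(duration)]
    for drag_start, drag_end in drags:
        low, high = sorted((drag_start, drag_end))
        sign = 1 if drag_start > drag_end else -1
        for t in range(low, high + 1):
            counts[t] += sign
    return counts
```

For example, a session watching seconds 2 through 8 and dragging back from 6 to 3 yields a count of 2 over the re-watched span 3 to 6.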
14. A device for determining a video cover, characterized by comprising:
A first determining module, configured to determine the total watched time corresponding to each time point of a target video;
A second determining module, configured to determine a material time point in the target video according to the total watched time corresponding to each time point of the target video;
A third determining module, configured to determine the cover of the target video according to video frames near the material time point.
15. The device according to claim 14, characterized in that the second determining module comprises:
A first determining submodule, configured to determine the maximum point of the total watched time corresponding to the time points of the target video;
A second determining submodule, configured to determine the material time point in the target video according to the maximum point.
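The work of the first and second determining submodules in claim 15 reduces to summing the per-session watched times into a total per time point and taking its maximum. A sketch under assumed names (sessions represented as per-time-point count lists):

```python
def material_time_point(session_counts, duration):
    """Material time point per claims 14-15: sum each session's
    watched time into a total per time point, then take the time
    point where that total is maximal."""
    totals = [0] * duration
    for counts in session_counts:
        for t, count in enumerate(counts):
            totals[t] += count
    # index of the maximum of the total watched time
    return max(range(duration), key=totals.__getitem__)
```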
16. The device according to claim 14, characterized in that the third determining module comprises:
A first screening submodule, configured to screen out, according to the image information of the video frames near the material time point, multiple candidate video frames from the video frames near the material time point for a user to select;
A third determining submodule, configured to determine the candidate video frame selected by the user as a static cover of the target video.
17. The device according to claim 14, characterized in that the third determining module comprises:
A second screening submodule, configured to screen out, according to the image information of the video frames near the material time point, multiple candidate video segments near the material time point for a user to select;
A fourth determining submodule, configured to determine the candidate video segment selected by the user as a dynamic cover of the target video.
18. The device according to claim 14, characterized in that the third determining module comprises:
A third screening submodule, configured to screen out, according to the image information of the video frames near the material time point, a video frame from the video near the material time point;
A fifth determining submodule, configured to determine the screened-out video frame as a static cover of the target video.
19. The device according to claim 14, characterized in that the third determining module comprises:
A fourth screening submodule, configured to screen out, according to the image information of the video frames near the material time point, a video segment near the material time point;
A sixth determining submodule, configured to determine the screened-out video segment as a dynamic cover of the target video.
20. The device according to any one of claims 16 to 19, characterized in that the image information of a video frame includes one or more of the clarity, contrast, saturation, and sharpness of the video frame.
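Claim 20 names clarity, contrast, saturation, and sharpness as usable image information. As one concrete scoring choice (an assumption for illustration; the patent does not specify a metric), sharpness can be estimated as the variance of a Laplacian response, and the sharpest frames near the material time point kept as candidates:

```python
def sharpness(gray):
    """Variance of a 4-neighbour Laplacian over a grayscale image
    (a list of pixel rows) - a common proxy for frame sharpness."""
    height, width = len(gray), len(gray[0])
    responses = []
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            responses.append(gray[y - 1][x] + gray[y + 1][x]
                             + gray[y][x - 1] + gray[y][x + 1]
                             - 4 * gray[y][x])
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def pick_candidates(frames, top_k=3):
    """Keep the top_k sharpest frames as cover candidates, in the
    spirit of the screening submodules of claims 16-19."""
    return sorted(frames, key=sharpness, reverse=True)[:top_k]
```

A flat frame scores 0 while a high-detail frame scores high, so the ranking prefers crisp, textured frames.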
21. The device according to claim 14, characterized in that the first determining module comprises:
An obtaining submodule, configured to obtain user watching behavior data corresponding to the target video, wherein each group of user watching behavior data corresponds to one user watching behavior;
A seventh determining submodule, configured to determine the watched time of each time point of the target video in the user watching behavior corresponding to each group of user watching behavior data;
An eighth determining submodule, configured to determine the total watched time corresponding to each time point of the target video according to the watched times.
22. The device according to claim 21, characterized in that the seventh determining submodule comprises:
An obtaining unit, configured to, for each group of user watching behavior data, in a case that a dragging behavior exists in the user watching behavior corresponding to this group of user watching behavior data, obtain the viewing start time point, the viewing end time point, the dragging start time point, and the dragging end time point in this group of user watching behavior data;
A first determining unit, configured to determine, according to the viewing start time point and the viewing end time point, the basic watched time of each time point of the target video in the user watching behavior corresponding to this group of user watching behavior data;
A second determining unit, configured to determine, according to the dragging start time point and the dragging end time point, the adjustment watched time of each time point of the target video in the user watching behavior corresponding to this group of user watching behavior data;
A third determining unit, configured to determine, according to the basic watched time and the adjustment watched time of each time point of the target video in the user watching behavior corresponding to this group of user watching behavior data, the watched time of each time point of the target video in the user watching behavior corresponding to this group of user watching behavior data.
23. The device according to claim 22, characterized in that the first determining unit is configured to:
Determine that the basic watched time of a first group of time points of the target video in the user watching behavior corresponding to this group of user watching behavior data is 1, wherein the first group of time points comprises the time points between the viewing start time point and the viewing end time point;
Determine that the basic watched time of a second group of time points of the target video in the user watching behavior corresponding to this group of user watching behavior data is 0, wherein the second group of time points comprises the time points before the viewing start time point and the time points after the viewing end time point.
24. The device according to claim 22, characterized in that the second determining unit is configured to:
In a case that the dragging start time point is later than the dragging end time point, determine that the adjustment watched time of a third group of time points of the target video in the user watching behavior corresponding to this group of user watching behavior data is 1, wherein the third group of time points comprises the time points between the dragging start time point and the dragging end time point; and in a case that the dragging start time point is earlier than the dragging end time point, determine that the adjustment watched time of the third group of time points of the target video in the user watching behavior corresponding to this group of user watching behavior data is -1;
Or,
In a case that the dragging start time point is later than the dragging end time point, determine that the adjustment watched time of the third group of time points of the target video in the user watching behavior corresponding to this group of user watching behavior data is -1; and in a case that the dragging start time point is earlier than the dragging end time point, determine that the adjustment watched time of the third group of time points of the target video in the user watching behavior corresponding to this group of user watching behavior data is 1.
25. The device according to any one of claims 22 to 24, characterized in that the third determining unit is configured to:
Add the basic watched time of each time point of the target video in the user watching behavior corresponding to this group of user watching behavior data to the adjustment watched time of the same time point, to obtain the watched time of each time point of the target video in the user watching behavior corresponding to this group of user watching behavior data;
Or,
Subtract the adjustment watched time of each time point of the target video in the user watching behavior corresponding to this group of user watching behavior data from the basic watched time of the same time point, to obtain the watched time of each time point of the target video in the user watching behavior corresponding to this group of user watching behavior data.
26. The device according to claim 21, characterized in that the seventh determining submodule is configured to:
For each group of user watching behavior data, in a case that no dragging behavior exists in the user watching behavior corresponding to this group of user watching behavior data, obtain the viewing start time point and the viewing end time point in this group of user watching behavior data;
Determine that the watched time of a first group of time points of the target video in the user watching behavior corresponding to this group of user watching behavior data is 1, wherein the first group of time points comprises the time points between the viewing start time point and the viewing end time point;
Determine that the watched time of a second group of time points of the target video in the user watching behavior corresponding to this group of user watching behavior data is 0, wherein the second group of time points comprises the time points before the viewing start time point and the time points after the viewing end time point.
27. A device for determining a video cover, characterized by comprising:
A processor;
A memory for storing processor-executable instructions;
Wherein the processor is configured to perform the method according to any one of claims 1 to 13.
28. A non-volatile computer-readable storage medium having computer program instructions stored thereon, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 13.
CN201711353892.3A 2017-12-15 2017-12-15 The determination method and device of video cover Pending CN109936756A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711353892.3A CN109936756A (en) 2017-12-15 2017-12-15 The determination method and device of video cover

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711353892.3A CN109936756A (en) 2017-12-15 2017-12-15 The determination method and device of video cover

Publications (1)

Publication Number Publication Date
CN109936756A true CN109936756A (en) 2019-06-25

Family

ID=66980547

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711353892.3A Pending CN109936756A (en) 2017-12-15 2017-12-15 The determination method and device of video cover

Country Status (1)

Country Link
CN (1) CN109936756A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104410920A (en) * 2014-12-31 2015-03-11 合一网络技术(北京)有限公司 Video segment playback amount-based method for labeling highlights
CN104581400A (en) * 2015-02-10 2015-04-29 飞狐信息技术(天津)有限公司 Video content processing method and video content processing device
US20150235672A1 (en) * 2014-02-20 2015-08-20 International Business Machines Corporation Techniques to Bias Video Thumbnail Selection Using Frequently Viewed Segments
CN106503029A (en) * 2015-09-08 2017-03-15 纳宝株式会社 Extract and provide the method for excellent image, system and recording medium in video content
CN107147939A (en) * 2017-05-05 2017-09-08 百度在线网络技术(北京)有限公司 Method and apparatus for adjusting net cast front cover


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110324662A (en) * 2019-06-28 2019-10-11 北京奇艺世纪科技有限公司 A kind of video cover generation method and device
CN111078078A (en) * 2019-11-29 2020-04-28 深圳市咨聊科技有限公司 Video playing control method, device, terminal and computer readable storage medium
CN111078070A (en) * 2019-11-29 2020-04-28 深圳市咨聊科技有限公司 PPT video barrage play control method, device, terminal and medium
CN113507611A (en) * 2021-09-09 2021-10-15 深圳思谋信息科技有限公司 Image storage method and device, computer equipment and storage medium
CN113507611B (en) * 2021-09-09 2021-12-31 深圳思谋信息科技有限公司 Image storage method and device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109089170A (en) Barrage display methods and device
CN109982142A (en) Video broadcasting method and device
CN108540845A (en) Barrage method for information display and device
CN109936756A (en) The determination method and device of video cover
CN109729435A (en) The extracting method and device of video clip
CN109257645A (en) Video cover generation method and device
CN106993229A (en) Interactive attribute methods of exhibiting and device
CN108093315A (en) Video generation method and device
CN107948708A (en) Barrage methods of exhibiting and device
CN108985176A (en) image generating method and device
CN108833939A (en) Generate the method and device of the poster of video
CN109963200A (en) Video broadcasting method and device
CN106960014A (en) Association user recommends method and device
CN110121106A (en) Video broadcasting method and device
CN106791535A (en) Video recording method and device
CN108924644A (en) Video clip extracting method and device
CN109063101A (en) The generation method and device of video cover
CN109302638A (en) Information processing method and device, electronic equipment and storage medium
CN107943550A (en) Method for showing interface and device
CN109286846A (en) Control method for playing back and device, electronic equipment and storage medium
CN108062364A (en) Information displaying method and device
CN107797741A (en) Method for showing interface and device
CN106599191A (en) User attribute analysis method and device
CN108259974A (en) Video matching method and device
CN109446346A (en) Multimedia resource edit methods and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200426

Address after: 310052 room 508, floor 5, building 4, No. 699, Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Alibaba (China) Co.,Ltd.

Address before: 100080 Beijing Haidian District city Haidian street A Sinosteel International Plaza No. 8 block 5 layer D

Applicant before: YOUKU INFORMATION TECHNOLOGY (BEIJING) Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20190625
