CN108650524A - Video cover generation method, device, computer equipment and storage medium - Google Patents

Video cover generation method, device, computer equipment and storage medium

Info

Publication number
CN108650524A
CN108650524A (application CN201810504021.5A; granted as CN108650524B)
Authority
CN
China
Prior art keywords
image
video
cover
memorability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810504021.5A
Other languages
Chinese (zh)
Other versions
CN108650524B (en)
Inventor
费梦娟
高永强
谯睿智
戴宇荣
沈小勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201810504021.5A priority Critical patent/CN108650524B/en
Publication of CN108650524A publication Critical patent/CN108650524A/en
Application granted granted Critical
Publication of CN108650524B publication Critical patent/CN108650524B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • H04N21/25875Management of end-user data involving end-user authentication
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Abstract

This application discloses a video cover generation method, apparatus, computer device, and storage medium. The method includes: obtaining multiple frames of images from a video; for each frame, determining a memorability score of the image according to image features that reflect how deep an impression the image leaves, the memorability score reflecting the user's degree of interest in the image; selecting, based on the memorability scores of the multiple frames, at least one target image for generating a video cover; and generating the video cover of the video based on the at least one target image. The scheme of this application helps improve the attractiveness of the video cover to users.

Description

Video cover generation method, device, computer equipment and storage medium
Technical field
This application relates, in particular, to a video cover generation method, apparatus, computer device, and storage medium.
Background
With the continuous development of Internet technology, more and more users like to publish videos to network platforms (for example, social platforms or video distribution platforms) so as to share the videos with other users on those platforms.
Before publishing a video uploaded by a user, the network platform may first choose one frame of the video as its video cover, and then publish the video together with that cover. As the identifier that presents the video's content, the video cover (also called the cover icon of the video) is obviously important. At present, however, the network platform simply uses the first frame of the video as the cover, or randomly picks one frame from the video, which makes it difficult to attract users' attention and results in a low click-through rate for the video.
Summary of the invention
In view of this, this application provides a video cover generation method, apparatus, computer device, and storage medium, so that the generated video cover better reflects the content of the video that users are interested in, thereby improving the attractiveness of the video cover to users and increasing the click-through rate of the video.
To achieve the above object, in one aspect, this application provides a video cover generation method, including:
obtaining multiple frames of images from a video;
for each frame of image, determining a memorability score of the image according to image features in the image that reflect the degree of deep impression, the memorability score being used to reflect the user's degree of interest in the image;
selecting, based on the memorability scores of the multiple frames, at least one target image for generating a video cover from the multiple frames;
generating the video cover of the video based on the at least one target image.
In one possible implementation, determining the memorability score of the image includes:
calculating the memorability score of the image using a pre-trained image memorability model, the image memorability model being trained with several sample images labeled with memorability scores.
In one possible implementation, obtaining the multiple frames of images from the video includes:
obtaining a video for which a video cover is to be generated;
splitting the video into multiple consecutive video segments, each video segment including at least one frame of image;
selecting at least one frame of image from each video segment as a candidate cover, to obtain the multiple frames of images serving as candidate covers.
Preferably, selecting at least one frame of image from each video segment as a candidate cover includes:
separately calculating the sharpness of each frame of image in each video segment;
selecting, from each video segment, at least one frame whose sharpness meets a preset condition as a candidate cover.
In another aspect, this application further provides a video cover generation apparatus, including:
a video obtaining unit, configured to obtain multiple frames of images from a video;
an image scoring unit, configured to determine, for each frame of image, a memorability score of the image according to image features in the image that reflect the degree of deep impression, the memorability score being used to reflect the user's degree of interest in the image;
an image screening unit, configured to select, based on the memorability scores of the multiple frames, at least one target image for generating a video cover from the multiple frames;
a cover generation unit, configured to generate the video cover of the video based on the at least one target image.
In another aspect, this application further provides a computer device, including:
a processor and a memory;
wherein the processor is configured to execute a program stored in the memory;
and the memory is configured to store a program that is at least used for:
obtaining multiple frames of images from a video;
for each frame of image, determining a memorability score of the image according to image features in the image that reflect the degree of deep impression, the memorability score being used to reflect the user's degree of interest in the image;
selecting, based on the memorability scores of the multiple frames, at least one target image for generating a video cover from the multiple frames;
generating the video cover of the video based on the at least one target image.
In yet another aspect, this application further provides a storage medium storing computer-executable instructions which, when loaded and executed by a processor, implement the video cover generation method of any one of the embodiments of this application.
As can be seen, in the embodiments of this application, after the multiple frames serving as candidate covers are obtained from the video, the memorability score of each image can be determined separately. Because the memorability score of an image reflects the user's degree of interest in that image, choosing the target image for generating the video cover based on those scores helps select from the video the images that better reflect the content users are interested in, so that the generated video cover is more attractive to users and the click-through rate of the video is improved.
Description of the drawings
To explain the technical solutions in the embodiments of this application more clearly, the accompanying drawings needed for describing the embodiments are briefly introduced below. Obviously, the accompanying drawings in the following description are only embodiments of this application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the composition of a video cover generation system according to an embodiment of this application;
Fig. 2 is a schematic flowchart of an embodiment of a video cover generation method according to an embodiment of this application;
Fig. 3 shows an example of selecting a video cover from a video in an embodiment of this application;
Fig. 4 is a schematic diagram of the memorability of images with different saliency;
Fig. 5 is a schematic diagram of the memorability of multiple images expressing different emotions;
Fig. 6 is a schematic diagram of training an image memorability model with several sample images in an embodiment of this application;
Fig. 7 is a schematic flowchart of training an image memorability model in an embodiment of this application;
Fig. 8 is a schematic diagram of an application scenario to which a video cover generation method of an embodiment of this application is applicable;
Fig. 9 is a schematic flow interaction diagram of a video cover generation method according to an embodiment of this application;
Fig. 10 is a schematic diagram of another application scenario to which a video cover generation method of an embodiment of this application is applicable;
Fig. 11 is a schematic diagram of yet another application scenario to which a video cover generation method of an embodiment of this application is applicable;
Fig. 12 is another schematic flow interaction diagram of a video cover generation method according to an embodiment of this application;
Fig. 13 is a schematic diagram of the composition of an embodiment of a video cover generation apparatus according to an embodiment of this application;
Fig. 14 is a schematic diagram of the composition of a computer device according to an embodiment of this application.
Detailed description of embodiments
The video cover generation method of this application is suitable for choosing, from a video, images for generating a video cover, so that the selected images better reflect the content of the video that users are interested in and the attractiveness of the video cover to users is improved.
The inventors of this application found through research that the greater a user's interest in an image, the more memorable the image is to the user; therefore, a user's degree of interest in an image can be analyzed through the memorability of the image. The memorability of an image indicates how deep an impression the image leaves on people, and thus characterizes the degree to which users find the image interesting. Based on this finding, when choosing images for generating a video cover from a video, this application takes the memorability of the images in the video into account, so as to improve the memorability of the generated video cover and thereby the user's interest in it.
The video cover generation method of this application is applicable to a server in a network platform, for example, a server in a multimedia network platform, which automatically selects a video cover for a video uploaded by a user. The method is also applicable to a terminal, such as a mobile phone, a tablet computer, or a laptop, which selects images suitable for generating a video cover from a video while the user uploads that video to the network platform through the terminal.
For ease of understanding, a scenario to which the solution of this application is applicable is introduced first. For example, Fig. 1 is a schematic diagram of the composition of a video cover generation system of this application.
The system shown in Fig. 1 includes a terminal 10 and a server 20 in a network platform, and the terminal 10 and the server 20 communicate through a network 30.
The network platform may be a social platform, a multimedia platform, or the like, and may include one or more servers. Fig. 1 takes a single server in the network platform as an example, but when the network platform includes multiple servers, each of them performs the same operations.
The terminal 10 is configured to upload a video to be published to the server 20 in the network platform.
The server 20 in the network platform is configured to determine the video cover corresponding to the video uploaded by the terminal, and to publish the video with that cover.
When the terminal does not specify a cover for the video, the server of the network platform needs to select at least one image for generating a video cover from the video to be published, and to generate the cover of the video using the selected image or images.
The video cover generation method is first introduced from the server side. For example, Fig. 2 is a schematic flowchart of an embodiment of a video cover generation method of this application. The method of this embodiment may include:
S201: Receive a video uploaded by a terminal.
For example, the terminal requests the server to upload a video, and after the server grants the request, the terminal transmits the video to be published to the server.
It can be understood that step S201 is not a necessary step for the server to select a video cover for the video; it is described only to facilitate understanding of the solution, using one possible source of the video for which a cover is to be generated as an example. In practice, the video may also be uploaded by administrators on the server side or transferred to the server by another network platform, which is not limited here.
S202: Obtain multiple frames of images from the video.
The video is the video for which a video cover is to be generated.
In one possible implementation, step S202 may be to determine the frames contained in the video, so that images for generating the video cover can be selected from them. In this case, every frame of the video can be regarded as a candidate image that may be used to generate the cover. A candidate cover refers to an image in the video that can be used to generate the video cover.
In another possible case, in order to reduce the amount of data to be processed while still covering the content of the video comprehensively, the server may extract part of the images from the video as candidate covers, for example, by randomly sampling multiple frames from the video.
However, randomly sampling images from the video as candidate covers easily leads to the sampled frames being concentrated in a small part of the video, so that the candidate covers cannot comprehensively reflect the content the video shows. For example, if a video contains 1000 frames but the extracted candidate covers are concentrated between frames 10 and 100, the candidates can only reflect part of the video, some highlights are easily missed, and the cover subsequently chosen from the candidates may fail to reflect the exciting content that users are interested in. Optionally, so that the screened candidate covers reflect the content of the video more comprehensively, the server may first split the video into multiple consecutive video segments and then select at least one image from each segment as a candidate cover.
For example, the video may be split evenly into multiple segments, so that each segment contains the same number of frames or, when the frames cannot be divided evenly, the frame counts of any two segments differ by at most one. Then one frame is selected from each segment as a candidate cover, yielding the multiple frames serving as candidate covers.
Optionally, to ensure the sharpness of the video cover, after the video is split into multiple segments, the sharpness of each frame in each segment may also be calculated, and at least one frame whose sharpness meets a preset condition is selected from each segment as a candidate cover. The preset condition can be set as needed; for example, the sharpness exceeds a preset threshold, or the sharpness is the highest in the segment. There are many ways to calculate image sharpness, and this application does not limit which one is used.
For ease of understanding, refer to Fig. 3, which shows an example of selecting a video cover from a video. In Fig. 3 the video is split evenly into multiple segments, each containing multiple frames. Then, for each segment, the frame with the highest sharpness is selected as a candidate cover based on the sharpness of every frame in the segment, yielding multiple candidate covers.
It can be understood that screening candidate covers from each segment in combination with image sharpness both avoids the candidate covers being too concentrated and too similar to reflect the video's content comprehensively, and ensures that the sharpness of the screened candidates meets the requirement, which helps improve the sharpness of the subsequently selected video cover and the user's interest in it.
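As an illustration of the segment splitting and sharpness screening just described, below is a minimal Python sketch. It assumes OpenCV and NumPy are available and uses the variance of the Laplacian as one possible sharpness measure; the function names, the default of 10 segments, and the choice of sharpness metric are illustrative assumptions rather than an implementation prescribed by this application.

```python
import cv2
import numpy as np

def sharpness(frame):
    # One possible sharpness measure: variance of the Laplacian (an assumption for this sketch).
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def candidate_covers(video_path, num_segments=10):
    """Split the video evenly into segments and keep the sharpest frame of each as a candidate cover."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    ok, frame = cap.read()
    while ok:
        frames.append(frame)
        ok, frame = cap.read()
    cap.release()

    # Even split: the frame counts of any two segments differ by at most one.
    index_chunks = np.array_split(np.arange(len(frames)), num_segments)
    candidates = []
    for chunk in index_chunks:
        if len(chunk) == 0:
            continue
        best = max((frames[i] for i in chunk), key=sharpness)  # sharpest frame in this segment
        candidates.append(best)
    return candidates
```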
S203: For each frame of image, determine, according to image features in the image that reflect the degree of deep impression, a memorability score used to reflect the user's degree of interest in the image.
The memorability score of an image reflects how memorable the image is. The score may also be called a memorability index, and may be an integer value, a probability value, a memorability grade, or any other form that characterizes memorability.
It can be understood that certain features of an image tend to attract attention and leave a deep impression; therefore, by determining the image features that reflect the degree of deep impression, the memorability score corresponding to the image can be obtained from those features. For example, whether a person or another specified target object is present in the image, the position of that person or object, and the parts of the image that express emotion can all serve as features reflecting the depth of impression. For instance, an image in which an object such as a person is in the middle is more memorable than one in which the object is at the edge or absent.
In this application, the inventors found through research that the memorability of an image is related to its popularity, its saliency, the emotion expressed by its content, and so on. Therefore, the saliency of an image, the emotion it expresses, and its popularity can all serve as image features reflecting the degree of deep impression. The higher the saliency of an image, the higher its memorability; the more popular an image, the higher its memorability; and among images that express emotion, those expressing certain emotions are more memorable than those expressing others.
The saliency of an image represents visual attention and indicates how strongly an image region draws the eye. There are many algorithms for evaluating image saliency, and this application does not limit them. To help understand the relationship between saliency and memorability, refer to Fig. 4, which shows the memorability of images with different saliency. Among the three images from left to right in Fig. 4, in the first the person is at the center of the image; in the second the person is at the right side; and in the third there is no person. The saliency of the three images decreases from left to right, the first image being the most salient; correspondingly, extensive testing shows that the first image also has the highest memorability. As can be seen from Fig. 4, the memorability of the first image is 0.751, that of the second is 0.39, and that of the third is 0.241.
As another example, images expressing stronger emotions such as sadness, surprise, or anger are more memorable than images expressing emotions such as contentment or awe. Refer to Fig. 5, which shows the memorability of multiple images expressing different emotions. Among the three images from left to right, the person in the first expresses anger, the person in the second expresses sadness, and the person in the third expresses contentment. Correspondingly, extensive testing shows that the first image has the highest memorability at 0.95, the second 0.88, and the third the lowest at 0.79.
As yet another example, the popularity of an image reflects how often it is liked, recommended, or browsed by users in a social network; the more an image is browsed or recommended, the more popular it is. The popularity of an image can be determined, for example, by counting users' operations such as recommending and browsing the image in the social network. The relationship between popularity and memorability is similar to the cases above and is not repeated here.
From the above analysis, the saliency of an image, the emotion it expresses, and similar features can reflect the memorability of the image, so the memorability score can be determined by analyzing image features along multiple dimensions such as saliency and expressed emotion. For example, the feature dimensions to be analyzed may be set, such as the saliency of the image and the emotion it expresses (e.g., emotion scores for different emotion dimensions); different weights may then be assigned to the different dimensions, and the scores of the dimensions may be weighted and summed to determine the memorability of the image.
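The weighted summation described in the preceding paragraph might be sketched as follows; the feature dimensions and weights used here are illustrative assumptions, since the application leaves both to be configured as needed.

```python
# Hypothetical per-dimension scores in [0, 1] for one image; the dimensions and weights are
# examples only -- the application only requires that each feature dimension reflecting the
# depth of impression receive its own weight.
def memorability_score(dimension_scores, weights):
    return sum(weights[name] * dimension_scores[name] for name in dimension_scores)

scores = {"saliency": 0.8, "emotion": 0.9, "popularity": 0.4}
weights = {"saliency": 0.4, "emotion": 0.4, "popularity": 0.2}
print(memorability_score(scores, weights))  # 0.4*0.8 + 0.4*0.9 + 0.2*0.4 = 0.76
```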
Optionally, to determine the memorability score of an image more conveniently and quickly, an image memorability model may be trained in advance using several sample images labeled with memorability scores. The image memorability model converts the image features that characterize the depth of impression into a memorability score and outputs it. Accordingly, the pre-trained image memorability model can be used to calculate the memorability score of each frame: each frame is input into the model, and the model outputs the memorability score of that image.
There are many ways to train the image memorability model with sample images labeled with memorability scores; for example, a deep learning network or convolutional neural network model may be trained iteratively on several such sample images, and the finally trained network model is taken as the image memorability model.
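As a sketch of the inference step, assuming the memorability model is a PyTorch module that maps a preprocessed image to a scalar score (the input size, normalization, and model interface below are assumptions for illustration):

```python
import torch
import torchvision.transforms as T

# Assumed preprocessing; the application does not fix an input size or normalization.
preprocess = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor()])

@torch.no_grad()
def score_candidates(model, candidate_frames):
    """Run every candidate cover through the trained memorability model in one batch."""
    model.eval()
    batch = torch.stack([preprocess(f) for f in candidate_frames])
    return model(batch).squeeze(1).tolist()  # one memorability score per candidate
```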
For ease of understanding, training a deep learning network model with several sample images is described as an example. Refer to Fig. 6, which shows an example of training a deep learning network with several sample images. As can be seen from the figure, several sample images labeled with memorability scores are input into the deep learning network to be trained; the memorability score output by the network for each sample image is then compared with the actual labeled score of that image, and the network is adjusted continuously until it outputs memorability scores close to the actual scores of the samples, at which point the image memorability model is obtained. In conjunction with the example of Fig. 6, refer to Fig. 7, which shows the flow of training the deep learning network with several sample images. The flow may include:
S701: Obtain several sample images, each labeled with a memorability score.
The memorability scores of the sample images may be annotated manually in advance, for example, through tests with a large number of users, or by setting a memorability score for each sample image empirically.
It can be understood that, since the sample images differ, the memorability scores of different sample images will also differ.
S702: Input the several sample images into the deep learning network to be trained, and obtain the memorability score output by the network for each sample image.
The deep learning network may take many forms; for example, it may be a lightweight neural network such as MobileNet.
S703: Based on the memorability scores labeled on the sample images and the memorability scores output by the deep learning network, determine the accuracy with which the network predicts image memorability scores.
It can be understood that the deep learning network can estimate a memorability score for each image. To verify whether the estimated scores are accurate, the score estimated by the network for each sample image needs to be compared with the score actually labeled on that image; the degree of difference between the estimated and labeled scores reflects the accuracy of the network's predictions. For example, a loss function such as cross entropy can be used to compare the estimated memorability scores with the labeled ones.
Of course, judging the accuracy of the network's memorability predictions in other ways is equally applicable to this embodiment.
S704: Judge whether the accuracy of the network's memorability predictions meets a preset requirement. If so, determine the current deep learning network as the image memorability model and end training; if not, adjust the parameter values of the network and return to step S702.
For example, if the difference between the estimated memorability score of a sample image and its labeled score falls within a preset deviation range, the accuracy can be determined to meet the preset requirement.
It should be noted that Fig. 6 is only a simplified example of training a deep learning network. In practice, during training with sample images, after each round of training the network usually also needs to be tested with several sample images reserved for testing, and the final model is determined from the repeatedly trained networks by combining the test results.
Of course, Figs. 6 and 7 show only one possible way to obtain the image memorability model by training; training a network model with several sample images in other ways to obtain a model that can assess image memorability is equally applicable to this embodiment and is not limited here.
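A minimal training sketch corresponding to steps S701 to S704, written in PyTorch under several stated assumptions: the backbone is MobileNet as the lightweight network mentioned above, the regression head and the use of binary cross entropy over scores in [0, 1] are choices made for this sketch, and the mean-absolute-error tolerance stands in for the "preset requirement" on accuracy.

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class MemorabilityModel(nn.Module):
    """A lightweight backbone (MobileNet) with an assumed regression head producing a score in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.backbone = mobilenet_v2()
        self.backbone.classifier = nn.Sequential(nn.Linear(1280, 1), nn.Sigmoid())

    def forward(self, x):
        return self.backbone(x)

def train(model, loader, epochs=10, lr=1e-4, tolerance=0.05):
    """S701-S704 in miniature: predict, compare with the labeled scores, adjust, repeat."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCELoss()  # cross entropy between predicted and labeled memorability scores
    for _ in range(epochs):
        abs_error, count = 0.0, 0
        for images, labeled_scores in loader:       # sample images with annotated memorability (S701)
            pred = model(images).squeeze(1)         # predicted memorability scores (S702)
            loss = criterion(pred, labeled_scores)  # compare prediction with labels (S703)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                        # adjust the network parameters (S704)
            abs_error += (pred - labeled_scores).abs().sum().item()
            count += len(labeled_scores)
        if abs_error / count < tolerance:           # accuracy meets the preset requirement (S704)
            break
    return model
```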
S204: Based on the memorability scores of the multiple frames, select at least one target image for generating a video cover from the multiple frames.
Selecting according to the memorability scores of the frames serving as candidate covers helps pick images with relatively high memorability as the images for generating the video cover.
For example, at least one top-ranked target image may be selected from the multiple frames in descending order of memorability score.
Referring to Fig. 3, after one frame is selected from each video segment as a candidate cover, the candidate cover with the highest memorability score can be selected from the multiple candidate covers as the video cover, based on the memorability score of each candidate.
S205: Generate the video cover of the video based on the selected at least one target image.
It can be understood that video covers fall into two types: static video covers and dynamic video covers. For ease of understanding, several ways of generating each type are introduced. When the cover to be generated is a static video cover, the target image with the highest memorability score may be selected from the multiple frames and used to generate the static cover of the video; for example, the selected target image is directly determined as the static cover, or specific processing such as adding a title or caption is performed on it and the processed image is used as the static cover. It is also possible to select several top-scoring target images and synthesize one video cover from them.
When the cover to be generated is a dynamic video cover, one possible way is to first select, from the frames serving as candidate covers, the image with the highest memorability score (referred to here as the reference image for ease of distinction); then, within the video segment to which the reference image belongs, consecutive frames including the reference image are selected as the target images for generating the dynamic cover, and an animation is generated from those consecutive frames and used as the dynamic cover. For example, after the image with the highest memorability is selected from the candidate covers, the frames nearest to it before and after it within its segment may be chosen so as to obtain, say, 11 frames in total, and the animation generated from these frames serves as the dynamic video cover.
In another possible way of generating a dynamic video cover, multiple target images for generating the cover are selected from the frames serving as candidate covers, for example, the frames whose memorability scores exceed a preset threshold or the top-ranked frames by memorability score, and the animation serving as the dynamic cover is generated from these target images.
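The selection logic for static and dynamic covers might be sketched as follows; writing the animation as a GIF with imageio and the window of 5 frames on each side of the best frame (11 frames in total) are assumptions for illustration.

```python
import imageio

def pick_static_cover(candidates, scores):
    """Static cover: the candidate with the highest memorability score."""
    best = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best]

def make_dynamic_cover(segment_frames, best_index, window=5, out_path="cover.gif"):
    """Dynamic cover: an animation built from consecutive frames around the best-scoring frame."""
    start = max(0, best_index - window)
    end = min(len(segment_frames), best_index + window + 1)
    frames_rgb = [f[:, :, ::-1].copy() for f in segment_frames[start:end]]  # BGR -> RGB
    imageio.mimsave(out_path, frames_rgb, duration=0.1)
    return out_path
```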
It can be understood that, after the target images for generating the video cover are selected, there can be many ways to generate covers of the different types, and this application does not limit them.
It should be noted that, after the server selects the at least one target image for generating the video cover, generating the cover may be completed by the server itself or by another server or device, which this application does not limit. The step of generating the video cover from the selected target images, i.e., step S205, is described only to facilitate understanding of the whole cover generation process and is not a step that must be executed when selecting the video cover.
As can be seen, in the embodiments of this application, after the server obtains the frames serving as candidate covers from the video for which a cover is to be generated, it can determine the memorability score of each image separately. Because the memorability score of an image reflects the user's degree of interest in the image, choosing the target image for generating the video cover based on those scores helps select from the video, as the video cover, images that better reflect the content users are interested in, so that the generated cover is more attractive to users, which in turn helps improve the click-through rate of the video.
At the same time, since a user's degree of interest in an image is also positively correlated with how exciting the image is, the solution of this application, while selecting from the video the images users are interested in as the video cover, also tends to select the more exciting images in the video as the cover, which further helps improve the attractiveness of the cover.
It can be understood that the video cover generation method of this application can be applied to various application scenarios that implement video publishing. For ease of understanding, the process in which the server side chooses and generates the video cover is introduced below by taking one application scenario as an example.
For example, Fig. 8 shows an example of an application scenario to which the video cover generation method of this application is applicable. As shown in Fig. 8, the network platform in this scenario is a video publishing platform. The terminal 10 can upload a video A to be published to the server of the video publishing platform, without specifying any image of video A as its cover.
Accordingly, after receiving video A, the server 20 of the video publishing platform selects, based on the memorability scores of the images in video A that can serve as candidate covers, at least one target image for generating the cover of video A, and generates the cover a of video A from the selected target images; the server then stores video A together with its cover a in the shared storage.
The cover a of video A may be a static video cover or a dynamic video cover.
The shared storage is a storage area accessed by different terminals and stores the videos published by different users. It may be regarded as part of the storage of the server 20, or it may be a storage area in another storage device independent of the server 20.
By accessing the shared storage, the user of a terminal can see the videos published by all users within the user's access scope (for example, videos published by the user himself or herself, and possibly videos published by other users).
With reference to the application scenario of Fig. 8, and taking as an example a user publishing a video to the personal shared storage space that the server allocates to that user, refer to Fig. 9, which shows the flow interaction of another embodiment of the video cover generation method of this application. The method of this embodiment may include:
S901: The terminal sends a user login request to the server of the video publishing platform.
The user login request may carry the user's user identifier and a verification code; for example, the user identifier may be the user's user name and the verification code may be the login password.
S902: In response to the user login request, and after the user's identity is verified, the server completes the user login.
For example, if the server verifies that the user name and the login password match, it allows the user to log in and establishes a connection between the server and the terminal, so that the user can log in to the server through the terminal.
For example, taking the terminal being an instant messaging client as an example, the user can log in to the instant messaging server through the terminal in order to access the personal shared storage space that the instant messaging server allocates to the user, such as the so-called friend circle or personal space.
Steps S901 and S902 are not steps necessary for the terminal to publish a video to the server; they are described only to facilitate understanding of the solution, taking one scenario as an example.
S903: The terminal sends a video publishing request to the server, the request carrying the video to be published and the user identifier of the user.
For example, the video publishing request is used to request that the video be published to the user's personal shared storage space, so that other users who access that space can watch the video. For instance, the user publishes a short video to the personal shared storage space in order to share it with others; a short video generally refers to a video whose duration is less than a specific duration (for example, less than three minutes).
Of course, step S903 is introduced by taking only one video publishing scenario as an example; other video publishing scenarios are equally applicable to this embodiment.
S904: The server splits the video into multiple consecutive video segments.
Each video segment includes at least one frame of image.
Optionally, the video is split according to its length, and the lengths of the resulting segments may be the same or different. For example, according to the length of the video, the video is split into multiple segments of the same or similar length; for instance, the video may be split into 10 segments each containing the same number of frames.
S905: The server separately calculates the sharpness of each frame in each video segment.
S906: The server selects, from each video segment, the frame with the highest sharpness as a candidate cover, obtaining multiple candidate covers.
In this embodiment, selecting the single sharpest frame as the candidate cover is used as an example; however, choosing one or more frames whose sharpness exceeds a preset threshold, or choosing candidate covers based on image sharpness in other ways, is equally applicable to this embodiment.
S907: Using the pre-trained image memorability model, the server calculates the memorability score of each candidate cover.
The image memorability model is obtained by training a network model with several sample images labeled with memorability scores.
S908: From the multiple candidate covers, the server selects the candidate cover with the highest memorability score as the static video cover of the video.
This embodiment takes generating a static video cover, and choosing the candidate cover with the highest memorability score as that static cover, as an example. Since only the single highest-scoring candidate is selected, it can be directly determined as the static cover without further processing. It can be understood, however, that in practice choosing multiple candidate covers and processing them to generate a static or dynamic video cover is equally applicable to this embodiment, which is not limited here.
It can be understood that the specific implementation of steps S904 to S908 may refer to the related description in the preceding embodiments and is not repeated here.
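Tying steps S904 to S908 together, a compact server-side orchestration might look like the sketch below, which assumes the helper functions from the earlier sketches (candidate_covers, score_candidates, pick_static_cover) are available; all names are illustrative.

```python
def choose_static_cover(video_path, model, num_segments=10):
    """S904-S908 in sequence: split, screen by sharpness, score memorability, pick the best frame."""
    candidates = candidate_covers(video_path, num_segments)  # S904-S906
    scores = score_candidates(model, candidates)             # S907
    return pick_static_cover(candidates, scores)             # S908
```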
S909: According to the user's user identifier, the server stores the video in the personal shared storage space corresponding to that user in the shared storage, and sets the selected static video cover to be displayed for the video.
Once the video is stored in the personal shared storage space and its cover is set to the selected static video cover, publication of the video is complete. Accordingly, the user, and other users who have permission to access the user's personal shared storage space, can access that space and see the static video cover of the video published by the user.
S910: The server returns to the terminal a prompt indicating that the video was published successfully.
It should be noted that steps S909 and S910 are optional; they are only one possible way of processing after the server has selected the images for generating the video cover. In practice, after selecting the target images for generating the cover, the server may also present the selected target images to the user, so that the user designates one or several of the at least one target image from which the static or dynamic video cover is generated.
For example, Fig. 10 shows another application scenario to which the video cover generation method of this application is applicable. In the example of Fig. 10, after the server selects from the video at least one target image for generating the video cover, it recommends the at least one target image to the user, and the user finally chooses the video cover.
As shown in Fig. 10, in step S10 the terminal sends the video to be published to the server.
In step S11, the server selects from the video at least one target image ranked highest by memorability score, for example, multiple target images. The process by which the server chooses target images may refer to the description of the embodiment of Fig. 2, or to the description of steps S904 to S908 in the embodiment of Fig. 9, except that the server may select one or more frames as target images for generating the video cover.
In step S12, the server recommends the selected target image or images that can be used to generate the video cover to the terminal, so as to instruct the user of the terminal to select at least one of the recommended target images as the video cover.
In step S13, the terminal notifies the server of the video cover selected by the user. For example, if the server recommended three target images to the user and the user selected one of them as the video cover, the terminal sends the identifier of the target image selected by the user as the cover to the server.
In step S14, the server takes the video cover selected by the user as the cover of the video, and publishes the cover together with the video to the shared storage.
Combining the solutions of the above embodiments, the inventors of this application tested a number of short videos published on the platform, including videos of various everyday scenes such as selfies, parties, food, indoor and outdoor scenes, and sports. In these user-shot short videos, people are usually the main subject, but all kinds of other content may be mixed in. Applying the person-centered solution of this application, in which cover images are selected from the videos based on memorability scores, and comparing the resulting video covers with covers determined by existing approaches such as random selection, it is clear that the covers generated by the solution of this application are more exciting and sharper, achieving a better effect.
It can be understood that the video cover selection methods of the above embodiments are described by taking, as an example, the server selecting from the video at least one target image for generating the video cover. However, before the terminal uploads the video to be published to the server, the terminal may also first determine from the video at least one target image for generating the video cover and then generate the video cover based on the selected target images, or it may send the information of the at least one target image together with the video to be published to the server, so that the server publishes the video and generates its cover from the at least one target image.
For example, Fig. 11 shows an example of the video cover generation method of this application in yet another application scenario. As can be seen from Fig. 11, in this scenario, after the terminal 10 obtains the video to be published, it can choose from the video the video cover, or at least one image for generating the video cover, and send the selected cover or the information of the selected images together with the video to the server 20.
With reference to Fig. 11, refer to Fig. 12, which shows the flow interaction of yet another embodiment of the video cover generation method of this application. The method of this embodiment may include:
S1201: The terminal determines the video to be published.
For example, the terminal receives the video to be published selected by the user.
S1202: The terminal splits the video into multiple consecutive video segments.
S1203: The terminal separately calculates the sharpness of each frame in each video segment.
This step is optional. On the premise that the sharpness of every frame in each segment is considered to meet the requirement, or when sharpness is not taken into account, step S1203 may be skipped, and one or more frames may be randomly selected from each segment directly as candidate covers.
S1204: The terminal selects, from each video segment, at least one frame whose sharpness meets a preset condition as a candidate cover, obtaining multiple candidate covers.
For example, the frame with the highest sharpness is selected from each segment as a candidate cover.
The specific operations the terminal performs in steps S1202 to S1204 are similar to the corresponding operations performed on the server side described above; refer to the earlier description for details, which are not repeated here.
It should be noted that steps S1202 to S1204 are only one way for the terminal to obtain the frames serving as candidate covers from the video; in practice, the terminal may also treat every frame in the video as a candidate cover. Of course, other ways are also possible, and the specific manners in which the server side obtains the frames serving as candidate covers, described above, are equally applicable to the terminal side and are not repeated here.
S1205: Using the preset image memorability model, the terminal separately calculates the memorability score of each candidate cover.
This step is only one way of calculating the memorability scores of the candidate covers; the manner in which the server side determines the memorability score of each frame serving as a candidate cover, described above, is equally applicable to the terminal side, and reference may be made to the earlier description, which is not repeated here.
S1206: From the multiple candidate covers, the terminal selects the candidate cover with the highest memorability score as the video cover.
Step S1206 is only one way of selecting the video cover from the candidate covers. In practice, based on the memorability scores of the candidate covers, the terminal may also choose a candidate cover whose memorability score exceeds a preset threshold as the video cover; of course, other ways of selecting the video cover from the candidates are also possible and are not limited here.
Step S1206 is described by taking the terminal selecting a single candidate cover as the video cover as an example; in this case, the video cover chosen by the terminal is actually a static video cover. In practice, the terminal may also choose multiple candidate covers as the video cover. Alternatively, after selecting the candidate cover with the highest memorability score, the terminal may also extract, from the video segment to which that candidate belongs, the frames nearest to it as the video cover. Of course, the other implementations described above for selecting, based on the memorability scores of the frames serving as candidate covers, at least one target image for generating the video cover are equally applicable to this embodiment and are not repeated here.
It can be understood that, when the terminal selects multiple target images for generating the video cover, the terminal may also recommend the selected target images to the user, and the user finally chooses the images to be used as the video cover.
S1207: The terminal sends the identifier of the video cover and the video to the server.
For example, while transmitting the video to the server, the terminal indicates the frame number of the image in the video that serves as the video cover, so that the server determines which image in the video has been chosen as the cover.
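A hypothetical shape for the publishing request of step S1207, in which the terminal identifies the chosen cover by its frame number, might be the following; the field names and transport are not specified by the application and are assumptions for illustration.

```python
# Hypothetical request payload sent by the terminal when publishing the video (assumed fields).
publish_request = {
    "user_id": "user-123",                # user identifier carried by the publishing request
    "video_file": "clip_to_publish.mp4",  # video to be published
    "cover_frame_index": 137,             # frame number of the image chosen as the video cover
}
```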
S1208: The server publishes the video with the video cover.
When the cover of the video has been determined, the server may publish the video in many ways; for example, the server may store the video and its cover in association in the shared storage.
Step S1208 is described by taking the terminal directly selecting the video cover as an example. In practice, the terminal may also select one or more target images for generating the video cover and then indicate the one or more target images to the server, so that the server generates the static or dynamic cover from them; or the terminal may generate the static or dynamic cover from the one or more target images and then transfer it to the server.
Corresponding to the video cover generation method of this application, an embodiment of this application further provides a video cover generation apparatus. For example, refer to Figure 13, which is a schematic diagram of the composition of an embodiment of the video cover generation apparatus of this application. The apparatus of this embodiment may be applied to a computer device, and the computer device may be the above-mentioned server or the above-mentioned terminal. The apparatus of this embodiment may include:
a video acquisition unit 1301, configured to obtain multiple frames of images in a video;
an image scoring unit 1302, configured to determine, for each frame of the images, an unforgettable degree score of the image according to image features of the image that reflect how memorable the image is, where the unforgettable degree score is used to reflect the degree of user interest in the image;
an image screening unit 1303, configured to select, based on the unforgettable degree scores of the multiple frames of images, at least one frame of target image for generating a video cover from the multiple frames of images; and
a cover generation unit 1304, configured to generate the video cover of the video based on the at least one frame of target image.
In one possible implementation, the image scoring unit includes:
an image scoring subunit, configured to calculate, for each frame of the images, the unforgettable degree score of the image by using an image unforgettable degree model obtained through training in advance, where the image unforgettable degree model is trained by using a number of sample images annotated with unforgettable degree scores.
Optionally, the apparatus may further include a model training unit, configured to train the image unforgettable degree model in the following manner (sketched in code after the listed steps):
obtaining a number of sample images, where each sample image is annotated with an unforgettable degree score;
inputting the sample images into a deep learning network to be trained, to obtain the unforgettable degree score of each sample image predicted by the deep learning network;
determining, based on the annotated unforgettable degree scores of the sample images and the unforgettable degree scores output by the deep learning network, the accuracy with which the deep learning network predicts image unforgettable degree scores; and
when the accuracy does not meet a preset requirement, adjusting the parameter values of the deep learning network and returning to the operation of inputting the sample images into the deep learning network to be trained, until the accuracy meets the preset requirement.
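A minimal training-loop sketch of these steps is given below. It assumes the annotated sample images are supplied by a PyTorch DataLoader yielding (image, score) pairs; the ResNet-18 backbone, the mean-squared-error loss, and the stopping criterion stand in for the unspecified "deep learning network" and "preset requirement" and are not choices made by this application:

    # Minimal training-loop sketch (assumptions: a DataLoader yielding
    # (image_tensor, score) pairs, a ResNet-18 backbone with one regression
    # output, MSE as the training objective, and mean squared error as the
    # "accuracy" stopping criterion).
    import torch
    import torch.nn as nn
    from torchvision import models

    def train_memorability_model(loader, epochs: int = 10, target_mse: float = 0.01):
        net = models.resnet18(num_classes=1)                 # network to be trained
        optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
        criterion = nn.MSELoss()
        for _ in range(epochs):
            total, batches = 0.0, 0
            for images, scores in loader:
                optimizer.zero_grad()
                predicted = net(images).squeeze(1)           # predicted unforgettable degree scores
                loss = criterion(predicted, scores.float())  # compare with annotated scores
                loss.backward()                              # adjust network parameters
                optimizer.step()
                total, batches = total + loss.item(), batches + 1
            if total / max(batches, 1) <= target_mse:        # accuracy meets the preset requirement
                break
        return net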
In one possible implementation, the video acquisition unit includes:
a video acquisition subunit, configured to obtain a video for which a video cover is to be generated;
a video splitting subunit, configured to split the video into multiple consecutive video segments, where each video segment includes at least one frame of image; and
an image candidate subunit, configured to select at least one frame of image from each video segment as a candidate cover, to obtain the multiple frames of images serving as candidate covers.
Further, the image candidate subunit may include:
a sharpness computation subunit, configured to separately calculate the clarity (sharpness) of each frame of image in each video segment; and
a first candidate subunit, configured to select, from each video segment, at least one frame of image whose clarity meets a preset condition as a candidate cover (one possible realization is sketched below).
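One possible realization of this clarity-based candidate selection is sketched below, using the variance of the Laplacian as the clarity measure; the two-second segment length and the "sharpest frame per segment" rule are assumptions chosen for the example:

    # Sketch of segment splitting plus clarity-based candidate selection, using
    # the variance of the Laplacian as the clarity measure. The segment length
    # and the "sharpest frame per segment" rule are assumptions.
    import cv2
    import numpy as np

    def sharpness(frame: np.ndarray) -> float:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    def candidate_covers(video_path: str, seconds_per_segment: float = 2.0):
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
        seg_len = max(1, int(fps * seconds_per_segment))
        candidates, best, best_score, idx = [], None, -1.0, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            score = sharpness(frame)
            if score > best_score:            # sharpest frame seen so far in this segment
                best, best_score = frame, score
            idx += 1
            if idx % seg_len == 0:            # segment boundary: keep its best frame
                candidates.append(best)
                best, best_score = None, -1.0
        if best is not None:                  # flush the last, possibly shorter segment
            candidates.append(best)
        cap.release()
        return candidates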
In one possible implementation, the image screening unit includes:
a first screening subunit, configured to select, from the multiple frames of images, the target image with the highest unforgettable degree score.
Correspondingly, the cover generation unit includes:
a first generation subunit, configured to generate a static video cover of the video by using the target image.
In another possible implementation, the image screening unit may include:
a second screening subunit, configured to select, from the multiple frames of images, the benchmark image with the highest unforgettable degree score; and
a third screening subunit, configured to select, from the video segment to which the benchmark image belongs, consecutive frames of images including the benchmark image as the target images for generating a dynamic video cover (as sketched below).
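As an illustrative sketch of this implementation, the snippet below assembles an animated cover from the consecutive frames surrounding the highest-scoring benchmark image; writing the result as a GIF via imageio, and the window size, are assumptions made for the example:

    # Illustrative assembly of a dynamic cover: take the consecutive frames
    # around the highest-scoring benchmark image and write them out as an
    # animated GIF. The GIF format and the window size are assumptions.
    import imageio.v2 as imageio
    import numpy as np

    def make_dynamic_cover(frames, scores, out_path: str = "cover.gif", window: int = 4) -> str:
        best = int(np.argmax(scores))                       # benchmark image
        lo, hi = max(0, best - window), min(len(frames), best + window + 1)
        clip = [np.ascontiguousarray(f[:, :, ::-1]) for f in frames[lo:hi]]  # BGR -> RGB
        imageio.mimsave(out_path, clip)                     # write the animated cover
        return out_path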
In another aspect, this application further provides a computer device, and the computer device may be the above-mentioned server or the above-mentioned terminal. For example, refer to Figure 14, which is a schematic structural diagram of the composition of a computer device of this application.
As shown in Figure 14, the computer device includes at least a processor 1401 and a memory 1402.
The processor 1401 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA), another programmable logic device, or the like.
The processor is configured to execute the program stored in the memory.
The memory 1402 is configured to store one or more programs, and a program may include program code, where the program code includes computer operation instructions.
In this embodiment of the application, the memory stores at least a program for implementing the following functions (a combined sketch follows the list):
obtaining multiple frames of images in a video;
for each frame of the images, determining an unforgettable degree score of the image according to image features of the image that reflect how memorable the image is, where the unforgettable degree score is used to reflect the degree of user interest in the image;
selecting, based on the unforgettable degree scores of the multiple frames of images, at least one frame of target image for generating a video cover from the multiple frames of images; and
generating the video cover of the video based on the at least one frame of target image.
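The four functions can be tied together as in the following sketch, which reuses the illustrative helpers defined earlier (candidate_covers, pick_static_cover, make_dynamic_cover) together with a trained scoring network; all of these names are assumptions for demonstration rather than the application's own implementation:

    # End-to-end sketch of the four stored functions, reusing the illustrative
    # helpers sketched earlier and a trained scoring network `net`; every name
    # here is an assumption made for this example.
    import cv2
    import torch

    def generate_video_cover(video_path: str, net, dynamic: bool = False) -> str:
        frames = candidate_covers(video_path)               # 1) frames of images from the video
        net.eval()
        scores = []
        with torch.no_grad():
            for frame in frames:                            # 2) score every frame
                rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
                x = torch.from_numpy(rgb).permute(2, 0, 1).float().unsqueeze(0) / 255.0
                scores.append(net(x).item())
        if dynamic:                                         # 3)+4) select target images and
            # in practice the consecutive frames of the benchmark's segment would
            # be re-read from the video rather than taken from the candidate list
            return make_dynamic_cover(frames, scores)       #      generate a dynamic cover
        best = pick_static_cover(scores)
        cv2.imwrite("cover.jpg", frames[best])              #      generate a static cover
        return "cover.jpg"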
In one possible implementation, the memory may include a program storage area and a data storage area. The program storage area may store an operating system and the application programs required by at least one function (for example, an image playback function); the data storage area may store data created during use of the computer, for example, score data and models.
The memory 1402 may include a high-speed random access memory, and may further include a non-volatile memory, for example, at least one magnetic disk storage device or another non-volatile solid-state storage device.
Optionally, the computer device may further include a communication interface 1403, an input unit 1404, a display 1405, and a communication bus 1406.
The processor 1401, the memory 1402, the communication interface 1403, the input unit 1404, and the display 1405 communicate with one another through the communication bus 1406.
Certainly, the structure shown in Figure 14 does not constitute a limitation on the computer device in this embodiment of the application; in practical applications, the computer device may include more or fewer components than those shown in Figure 14, or some components may be combined.
In another aspect, this application further provides a storage medium. The storage medium stores a computer program, and when the computer program is loaded and executed by a processor, the video cover generation method described in any one of the foregoing embodiments is implemented.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts among the embodiments, reference may be made to one another. Because the apparatus embodiments are basically similar to the method embodiments, their descriptions are relatively brief; for related details, refer to the descriptions of the method embodiments.
Finally, it should be noted that, in this specification, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", and any variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements that are not explicitly listed, or further includes elements inherent to such a process, method, article, or device. Unless otherwise restricted, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The foregoing description of the disclosed embodiments enables a person skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to a person skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The above descriptions are merely preferred embodiments of the present invention. It should be noted that a person skilled in the art may make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications shall also fall within the protection scope of the present invention.

Claims (13)

1. A video cover generation method, characterized by comprising:
obtaining multiple frames of images in a video;
for each frame of the images, determining an unforgettable degree score of the image according to image features of the image that reflect how memorable the image is, wherein the unforgettable degree score is used to reflect the degree of user interest in the image;
selecting, based on the unforgettable degree scores of the multiple frames of images, at least one frame of target image for generating a video cover from the multiple frames of images; and
generating the video cover of the video based on the at least one frame of target image.
2. The video cover generation method according to claim 1, wherein the determining an unforgettable degree score of the image according to image features of the image that reflect how memorable the image is comprises:
calculating the unforgettable degree score of the image by using an image unforgettable degree model obtained through training in advance, wherein the image unforgettable degree model is trained by using a number of sample images annotated with unforgettable degree scores.
3. The video cover generation method according to claim 1, wherein the obtaining multiple frames of images in a video comprises:
obtaining a video for which a video cover is to be generated;
splitting the video into multiple consecutive video segments, wherein each video segment comprises at least one frame of image; and
selecting at least one frame of image from each video segment as a candidate cover, to obtain the multiple frames of images serving as candidate covers.
4. The video cover generation method according to claim 3, wherein the selecting at least one frame of image from each video segment as a candidate cover comprises:
separately calculating the clarity of each frame of image in each video segment; and
selecting, from each video segment, at least one frame of image whose clarity meets a preset condition as a candidate cover.
5. The video cover generation method according to any one of claims 1 to 4, wherein the selecting, based on the unforgettable degree scores of the multiple frames of images, at least one frame of target image for generating a video cover from the multiple frames of images comprises:
selecting, from the multiple frames of images, the target image with the highest unforgettable degree score; and
wherein the generating the video cover of the video based on the at least one frame of target image comprises:
generating a static video cover of the video by using the target image.
6. The video cover generation method according to any one of claims 1 to 4, wherein the selecting, based on the unforgettable degree scores of the multiple frames of images, at least one frame of target image for generating a video cover from the multiple frames of images comprises:
selecting, from the multiple frames of images, a benchmark image with the highest unforgettable degree score; and
selecting, from the video segment to which the benchmark image belongs, consecutive frames of images comprising the benchmark image as target images for generating a dynamic video cover.
7. The video cover generation method according to claim 2, wherein the image unforgettable degree model is obtained through training in the following manner:
obtaining a number of sample images, wherein each sample image is annotated with an unforgettable degree score;
inputting the sample images into a deep learning network to be trained, to obtain the unforgettable degree score of each sample image predicted by the deep learning network;
determining, based on the annotated unforgettable degree scores of the sample images and the unforgettable degree scores output by the deep learning network, the accuracy with which the deep learning network predicts image unforgettable degree scores; and
when the accuracy does not meet a preset requirement, adjusting parameter values of the deep learning network and returning to the operation of inputting the sample images into the deep learning network to be trained, until the accuracy meets the preset requirement.
8. A video cover generation apparatus, characterized by comprising:
a video acquisition unit, configured to obtain multiple frames of images in a video;
an image scoring unit, configured to determine, for each frame of the images, an unforgettable degree score of the image according to image features of the image that reflect how memorable the image is, wherein the unforgettable degree score is used to reflect the degree of user interest in the image;
an image screening unit, configured to select, based on the unforgettable degree scores of the multiple frames of images, at least one frame of target image for generating a video cover from the multiple frames of images; and
a cover generation unit, configured to generate the video cover of the video based on the at least one frame of target image.
9. The video cover generation apparatus according to claim 8, wherein the image scoring unit comprises:
an image scoring subunit, configured to calculate, for each frame of the images, the unforgettable degree score of the image by using an image unforgettable degree model obtained through training in advance, wherein the image unforgettable degree model is trained by using a number of sample images annotated with unforgettable degree scores.
10. The video cover generation apparatus according to claim 8, wherein the video acquisition unit comprises:
a video acquisition subunit, configured to obtain a video for which a video cover is to be generated;
a video splitting subunit, configured to split the video into multiple consecutive video segments, wherein each video segment comprises at least one frame of image; and
an image candidate subunit, configured to select at least one frame of image from each video segment as a candidate cover, to obtain the multiple frames of images serving as candidate covers.
11. The video cover generation apparatus according to claim 10, wherein the image candidate subunit comprises:
a sharpness computation subunit, configured to separately calculate the clarity of each frame of image in each video segment; and
a first candidate subunit, configured to select, from each video segment, at least one frame of image whose clarity meets a preset condition as a candidate cover.
12. A computer device, characterized by comprising:
a processor and a memory,
wherein the processor is configured to execute a program stored in the memory; and
the memory is configured to store the program, and the program is at least used for:
obtaining multiple frames of images in a video;
for each frame of the images, determining an unforgettable degree score of the image according to image features of the image that reflect how memorable the image is, wherein the unforgettable degree score is used to reflect the degree of user interest in the image;
selecting, based on the unforgettable degree scores of the multiple frames of images, at least one frame of target image for generating a video cover from the multiple frames of images; and
generating the video cover of the video based on the at least one frame of target image.
13. A storage medium, wherein the storage medium stores computer-executable instructions, and when the computer-executable instructions are loaded and executed by a processor, the video cover generation method according to any one of claims 1 to 7 is implemented.
CN201810504021.5A 2018-05-23 2018-05-23 Video cover generation method and device, computer equipment and storage medium Active CN108650524B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810504021.5A CN108650524B (en) 2018-05-23 2018-05-23 Video cover generation method and device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN108650524A true CN108650524A (en) 2018-10-12
CN108650524B CN108650524B (en) 2022-08-16

Family

ID=63757992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810504021.5A Active CN108650524B (en) 2018-05-23 2018-05-23 Video cover generation method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN108650524B (en)

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101807198A (en) * 2010-01-08 2010-08-18 中国科学院软件研究所 Video abstraction generating method based on sketch
CN101853286A (en) * 2010-05-20 2010-10-06 上海全土豆网络科技有限公司 Intelligent selection method of video thumbnails
CN103621106A (en) * 2011-06-20 2014-03-05 微软公司 Providing video presentation commentary
US8867891B2 (en) * 2011-10-10 2014-10-21 Intellectual Ventures Fund 83 Llc Video concept classification using audio-visual grouplets
CN105900088A (en) * 2013-12-03 2016-08-24 谷歌公司 Dynamic thumbnail representation for a video playlist
CN104244024A (en) * 2014-09-26 2014-12-24 北京金山安全软件有限公司 Video cover generation method and device and terminal
US20160188997A1 (en) * 2014-12-29 2016-06-30 Neon Labs Inc. Selecting a High Valence Representative Image
CN104657468A (en) * 2015-02-12 2015-05-27 中国科学院自动化研究所 Fast video classification method based on images and texts
CN104850434A (en) * 2015-04-30 2015-08-19 腾讯科技(深圳)有限公司 Method and apparatus for downloading multimedia resources
CN107239203A (en) * 2016-03-29 2017-10-10 北京三星通信技术研究有限公司 A kind of image management method and device
CN106021485A (en) * 2016-05-19 2016-10-12 中国传媒大学 Multi-element attribute movie data visualization system
CN107657468A (en) * 2016-07-25 2018-02-02 北京金山云网络技术有限公司 Material evaluating method and device
CN106503693A (en) * 2016-11-28 2017-03-15 北京字节跳动科技有限公司 The offer method and device of video front cover
CN106792085A (en) * 2016-12-09 2017-05-31 广州华多网络科技有限公司 A kind of method and apparatus for generating video cover image
CN107093164A (en) * 2017-04-26 2017-08-25 北京百度网讯科技有限公司 Method and apparatus for generating image
CN107707967A (en) * 2017-09-30 2018-02-16 咪咕视讯科技有限公司 The determination method, apparatus and computer-readable recording medium of a kind of video file front cover
CN107832725A (en) * 2017-11-17 2018-03-23 北京奇虎科技有限公司 Video front cover extracting method and device based on evaluation index
CN107918656A (en) * 2017-11-17 2018-04-17 北京奇虎科技有限公司 Video front cover extracting method and device based on video title
CN107958030A (en) * 2017-11-17 2018-04-24 北京奇虎科技有限公司 Video front cover recommended models optimization method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Lan, Yijie: "Research on Video Summarization Based on Emotion", Beijing Jiaotong University *

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111491202A (en) * 2019-01-29 2020-08-04 广州市百果园信息技术有限公司 Video publishing method, device, equipment and storage medium
CN110069664B (en) * 2019-04-24 2021-04-06 北京博视未来科技有限公司 Method and system for extracting cover picture of cartoon work
CN110069664A (en) * 2019-04-24 2019-07-30 北京博视未来科技有限公司 Cartoon surface plot extracting method and its system
CN110191357A (en) * 2019-06-28 2019-08-30 北京奇艺世纪科技有限公司 The excellent degree assessment of video clip, dynamic seal face generate method and device
CN110390025A (en) * 2019-07-24 2019-10-29 百度在线网络技术(北京)有限公司 Cover figure determines method, apparatus, equipment and computer readable storage medium
CN110381339A (en) * 2019-08-07 2019-10-25 腾讯科技(深圳)有限公司 Picture transmission method and device
CN110633377A (en) * 2019-09-23 2019-12-31 三星电子(中国)研发中心 Picture cleaning method and device
CN110572711A (en) * 2019-09-27 2019-12-13 北京达佳互联信息技术有限公司 Video cover generation method and device, computer equipment and storage medium
CN110879851A (en) * 2019-10-15 2020-03-13 北京三快在线科技有限公司 Video dynamic cover generation method and device, electronic equipment and readable storage medium
CN110856037A (en) * 2019-11-22 2020-02-28 北京金山云网络技术有限公司 Video cover determination method and device, electronic equipment and readable storage medium
CN111062314A (en) * 2019-12-13 2020-04-24 腾讯科技(深圳)有限公司 Image selection method and device, computer readable storage medium and electronic equipment
CN111143613B (en) * 2019-12-30 2024-02-06 携程计算机技术(上海)有限公司 Method, system, electronic device and storage medium for selecting video cover
CN111143613A (en) * 2019-12-30 2020-05-12 携程计算机技术(上海)有限公司 Method, system, electronic device and storage medium for selecting video cover
CN111182295A (en) * 2020-01-06 2020-05-19 腾讯科技(深圳)有限公司 Video data processing method, device, equipment and readable storage medium
CN111182295B (en) * 2020-01-06 2023-08-25 腾讯科技(深圳)有限公司 Video data processing method, device, equipment and readable storage medium
US20220312077A1 (en) * 2020-01-21 2022-09-29 Beijing Dajia Internet Information Technology Co., Ltd. Video recommendation method and apparatus
US11546663B2 (en) * 2020-01-21 2023-01-03 Beijing Dajia Internet Information Technology Co., Ltd. Video recommendation method and apparatus
CN111369434A (en) * 2020-02-13 2020-07-03 广州酷狗计算机科技有限公司 Method, device and equipment for generating cover of spliced video and storage medium
CN111327819A (en) * 2020-02-14 2020-06-23 北京大米未来科技有限公司 Method, device, electronic equipment and medium for selecting image
CN112749298B (en) * 2020-04-08 2024-02-09 腾讯科技(深圳)有限公司 Video cover determining method and device, electronic equipment and computer storage medium
CN112749298A (en) * 2020-04-08 2021-05-04 腾讯科技(深圳)有限公司 Video cover determining method and device, electronic equipment and computer storage medium
CN111984821A (en) * 2020-06-22 2020-11-24 汉海信息技术(上海)有限公司 Method and device for determining dynamic cover of video, storage medium and electronic equipment
CN111918130A (en) * 2020-08-11 2020-11-10 北京达佳互联信息技术有限公司 Video cover determining method and device, electronic equipment and storage medium
CN112383830A (en) * 2020-11-06 2021-02-19 北京小米移动软件有限公司 Video cover determining method and device and storage medium
CN112598453A (en) * 2020-12-29 2021-04-02 上海硬通网络科技有限公司 Advertisement putting method and device and electronic equipment
WO2022188563A1 (en) * 2021-03-10 2022-09-15 上海哔哩哔哩科技有限公司 Dynamic cover setting method and system
CN113301395B (en) * 2021-04-30 2023-07-07 当趣网络科技(杭州)有限公司 Voice searching method combined with user grade in video playing state
CN113301395A (en) * 2021-04-30 2021-08-24 当趣网络科技(杭州)有限公司 Voice searching method combining user grades in video playing state
CN113656642A (en) * 2021-08-20 2021-11-16 北京百度网讯科技有限公司 Cover image generation method, device, equipment, storage medium and program product
CN113641853A (en) * 2021-08-23 2021-11-12 北京字跳网络技术有限公司 Dynamic cover generation method, device, electronic equipment, medium and program product
CN113727200A (en) * 2021-08-27 2021-11-30 游艺星际(北京)科技有限公司 Video abstract information determination method and device, electronic equipment and storage medium
CN116311533A (en) * 2023-05-11 2023-06-23 广东中科凯泽信息科技有限公司 Sports space highlight moment image acquisition method based on AI intelligence
CN116311533B (en) * 2023-05-11 2023-10-03 广东中科凯泽信息科技有限公司 Sports space highlight moment image acquisition method based on AI intelligence

Also Published As

Publication number Publication date
CN108650524B (en) 2022-08-16

Similar Documents

Publication Publication Date Title
CN108650524A (en) Video cover generation method, device, computer equipment and storage medium
CN103718166B (en) Messaging device, information processing method
US20170065888A1 (en) Identifying And Extracting Video Game Highlights
CN104486649B (en) Video content ranking method and device
CN109756746A (en) Video reviewing method, device, server and storage medium
CN103024471B (en) A kind of rapid recommendation method for intelligent cloud television
CN110502665B (en) Video processing method and device
US11694444B2 (en) Setting ad breakpoints in a video within a messaging system
CN108600083B (en) Message reminding method and device
US11792491B2 (en) Inserting ads into a video within a messaging system
JP2020513705A (en) Method, system and medium for detecting stereoscopic video by generating fingerprints of portions of a video frame
CN116325765A (en) Selecting advertisements for video within a messaging system
CN115362474A (en) Scoods and hairstyles in modifiable video for custom multimedia messaging applications
CN111581521A (en) Group member recommendation method, device, server, storage medium and system
US11057332B2 (en) Augmented expression sticker control and management
KR102547942B1 (en) Method and apparatus for providing video special effects for producing short-form videos in a video commerce system
CN113038185B (en) Bullet screen processing method and device
CN105975494A (en) Service information pushing method and apparatus
CN113535991B (en) Multimedia resource recommendation method and device, electronic equipment and storage medium
CN110415318B (en) Image processing method and device
CN108521855B (en) Interactive method, interactive apparatus, electronic apparatus, and computer-readable storage medium
CN110381339B (en) Picture transmission method and device
US10643251B1 (en) Platform for locating and engaging content generators
Le Moan et al. Towards exploiting change blindness for image processing
CN113128261A (en) Data processing method and device and video processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant