CN110324706A - Method and apparatus for generating a video cover, and computer storage medium - Google Patents
Method and apparatus for generating a video cover, and computer storage medium Download PDF Info
- Publication number
- CN110324706A CN201810286238.3A
- Authority
- CN
- China
- Prior art keywords
- video
- data
- frame
- decoded
- data to be processed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 49
- 238000003860 storage Methods 0.000 title claims abstract description 47
- 238000012545 processing Methods 0.000 claims abstract description 93
- 239000000284 extract Substances 0.000 claims abstract description 16
- 238000000605 extraction Methods 0.000 claims abstract description 16
- 238000004590 computer program Methods 0.000 claims description 21
- 230000015654 memory Effects 0.000 claims description 14
- 238000001914 filtration Methods 0.000 claims description 4
- 238000001514 detection method Methods 0.000 claims description 3
- 230000001755 vocal effect Effects 0.000 claims description 2
- 230000000295 complement effect Effects 0.000 claims 1
- 230000015572 biosynthetic process Effects 0.000 abstract description 4
- 239000013598 vector Substances 0.000 description 24
- 230000008569 process Effects 0.000 description 14
- 230000006870 function Effects 0.000 description 9
- 230000006872 improvement Effects 0.000 description 7
- 238000005516 engineering process Methods 0.000 description 6
- 230000008921 facial expression Effects 0.000 description 5
- 238000013527 convolutional neural network Methods 0.000 description 4
- 238000010586 diagram Methods 0.000 description 4
- 230000008859 change Effects 0.000 description 3
- 238000004519 manufacturing process Methods 0.000 description 3
- 238000007781 pre-processing Methods 0.000 description 3
- 230000009471 action Effects 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 238000012512 characterization method Methods 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 238000000151 deposition Methods 0.000 description 1
- 238000009826 distribution Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000005611 electricity Effects 0.000 description 1
- 230000006698 induction Effects 0.000 description 1
- 238000003780 insertion Methods 0.000 description 1
- 230000037431 insertion Effects 0.000 description 1
- 239000000463 material Substances 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 230000001537 neural effect Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000000750 progressive effect Effects 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/835—Generation of protective data, e.g. certificates
- H04N21/8352—Generation of protective data, e.g. certificates involving content or source identification data, e.g. Unique Material Identifier [UMID]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Security & Cryptography (AREA)
- Television Signal Processing For Recording (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Embodiments of the present application disclose a method and an apparatus for generating a video cover, and a computer storage medium. The method includes: obtaining data to be processed, the data to be processed comprising image data or video data; identifying the type of the data to be processed; if the data to be processed is video data, decoding the data to be processed and extracting video frames from the decoded video data; sending the extracted video frames to a processing queue, so as to generate a video cover based on the video frames in the processing queue; and if the data to be processed is image data, decoding the data to be processed and sending the decoded images to the processing queue, so as to generate a video cover based on the images in the processing queue. The technical solution provided by the present application can improve the efficiency of generating video covers.
Description
Technical field
This application relates to the field of Internet technology, and in particular to a method and an apparatus for generating a video cover, and a computer storage medium.
Background technique
With the continuous development of Internet technology, the number of videos on video playback platforms keeps growing. Currently, to allow users to quickly grasp the theme of a video's content, a corresponding video cover is usually generated for the video. To save the manpower and material resources spent on manually producing video covers, image processing technology is now commonly used to generate video covers automatically.
At present, a video cover is usually generated automatically by a cover generating device that supports image recognition. Specifically, the cover generating device can analyze the image frames of a video based on the OpenGL standard to generate a video cover. Since OpenGL can typically only process images, the input to the cover generating device in the prior art is usually limited to images. To process a video, the video data must first be pre-processed by another device. Specifically, referring to Fig. 1, in the prior art two separate devices may be used when generating a video cover from video data. The pre-processing device decodes the video data and then extracts a certain number of image frames from the decoded video frames. To facilitate storing the image frames, the pre-processing device usually encodes the extracted frames into images in formats such as JPEG, BMP, or PNG. The cover generating device then loads these images, decodes them, and processes the decoded images to generate the video cover.
Therefore, when generating a video cover at present, the cover generating device can typically only process input images. If only video data is available, it must be processed separately by an independent pre-processing device and the cover generating device before the final video cover can be generated. This way of generating covers leads to low efficiency.
Summary of the invention
The purpose of the embodiments of the present application is to provide a method and an apparatus for generating a video cover, and a computer storage medium, which can improve the efficiency of generating video covers.
To achieve the above object, an embodiment of the present application provides a method for generating a video cover, the method comprising: obtaining data to be processed, the data to be processed comprising image data or video data; identifying the type of the data to be processed; if the data to be processed is video data, decoding the data to be processed and extracting video frames from the decoded video data; sending the extracted video frames to a processing queue, so as to generate a video cover based on the video frames in the processing queue; and if the data to be processed is image data, decoding the data to be processed and sending the decoded images to the processing queue, so as to generate a video cover based on the images in the processing queue.
To achieve the above object, an embodiment of the present application further provides an apparatus for generating a video cover. The apparatus comprises a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the following steps: obtaining data to be processed, the data to be processed comprising image data or video data; identifying the type of the data to be processed; if the data to be processed is video data, decoding the data to be processed and extracting video frames from the decoded video data; sending the extracted video frames to a processing queue, so as to generate a video cover based on the video frames in the processing queue; and if the data to be processed is image data, decoding the data to be processed and sending the decoded images to the processing queue, so as to generate a video cover based on the images in the processing queue.
To achieve the above object, an embodiment of the present application further provides a computer storage medium storing a computer program which, when executed by a processor, implements the following steps: obtaining data to be processed, the data to be processed comprising image data or video data; identifying the type of the data to be processed; if the data to be processed is video data, decoding the data to be processed and extracting video frames from the decoded video data; sending the extracted video frames to a processing queue, so as to generate a video cover based on the video frames in the processing queue; and if the data to be processed is image data, decoding the data to be processed and sending the decoded images to the processing queue, so as to generate a video cover based on the images in the processing queue.
As can be seen from the above, the apparatus for generating a video cover provided by the present application extends the functionality of prior-art cover generating devices: it can process both input image data and input video data. Inside the apparatus, the type of the input data can be identified; when the current data is identified as video data, the video data can be decoded to obtain the video frames contained in it. A certain number of video frames can then be extracted from the decoded frames and sent directly to the processing queue without an image encoding step, and the video frames in the processing queue can be used for subsequent video cover production. Compared with the prior art, the technical solution provided by the present application, on the one hand, extends the types of data to be processed; on the other hand, after video frames are extracted from the decoded video data, the extracted frames need not be encoded but can be sent directly to the processing queue for processing. This saves the steps of encoding the video frames and subsequently decoding the encoded frames, simplifying the video data processing flow while improving the efficiency of generating video covers.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a video cover generation process in the prior art;
Fig. 2 is a flowchart of a method for generating a video cover in an embodiment of the present application;
Fig. 3 is a schematic diagram of an index list in an embodiment of the present application;
Fig. 4 is a schematic diagram of multi-threaded processing in an embodiment of the present application;
Fig. 5 is a schematic diagram of processing the data to be processed in an embodiment of the present application;
Fig. 6 is a schematic diagram of decoding by a CPU and a GPU in an embodiment of the present application;
Fig. 7 is a schematic structural diagram of an apparatus for generating a video cover in an embodiment of the present application.
Specific embodiment
To enable those skilled in the art to better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. Based on the embodiments in the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the scope of protection of the present application.
The present application provides a method for generating a video cover. The method can be applied to a business server of a video playback website. After receiving a video uploaded by a user or an administrator, the business server can generate a video cover for the video.
Referring to Fig. 2, the method for generating a video cover provided by the present application may include the following steps.
S1: obtain data to be processed, the data to be processed comprising image data or video data.
In this embodiment, when a video cover needs to be generated for a certain video, data related to the video can be obtained, and the obtained data can serve as the above-mentioned data to be processed. The data to be processed can be video data or image data. Specifically, when a video cover needs to be generated for a video, the video's data can be pre-processed beforehand to obtain a series of images that characterize the content of the video. The pre-processing can consist of decoding the video's data, extracting a certain number of video frames from the decoded video data, and then converting the extracted frames, through image encoding, into images in a certain encoding format. The encoded images can then serve as the data to be processed. Alternatively, the video's data can be input directly into the video cover generating apparatus as the data to be processed, so that the video data is processed within the apparatus.
In this embodiment, the data to be processed that is input into the video cover generating apparatus may be loaded actively by the apparatus or received passively. The data to be processed may be sent to the video cover generating apparatus by another device, in which case the apparatus receives the data. Alternatively, the data to be processed may be stored on a resource server, and the video cover generating apparatus may hold the storage address of the data; by accessing the storage address, the apparatus can initiate a data download request to the resource server to download the data to be processed.
In one embodiment, to improve download efficiency, the data to be processed can be divided into multiple data blocks and stored on the resource server. The resource server may also store an index list associated with the data to be processed that has been divided into blocks; the index list can indicate the storage location of each data block. Specifically, the index list may include a storage identifier for each data block of the data to be processed. For example, referring to Fig. 3, the index list may take the form of an array with two columns: one column holds the storage identifiers of the data blocks, and the other holds the names of the data blocks. The form of the storage identifier may depend on how the data blocks are stored. If each data block has its own storage address, the storage identifier of a data block can be its storage address; for example, the storage address may be a URL (Uniform Resource Locator) pointing to the block. If all data blocks reside under the same storage address and each block merely has its own storage number, the storage identifier of a data block can be its storage number. Of course, in practical applications, some of the data blocks of the data to be processed may be stored under one storage address while the others are stored under another; in that case, the storage identifier of a data block can be the combination of its storage address and its storage number.
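The three storage-identifier cases above can be sketched as follows. This is only an illustration: the function name, the dictionary layout, and all URLs are hypothetical, not part of the patent.

```python
# Minimal sketch of the index list of Fig. 3: each entry pairs a storage
# identifier with a data block name. All names and URLs are hypothetical.

def resolve_block_location(entry):
    """Return a download location for one index-list entry.

    The storage identifier may be a full URL (per-block address), a bare
    storage number (shared address), or an (address, number) combination.
    """
    storage_id = entry["storage_id"]
    if isinstance(storage_id, tuple):               # (storage address, storage number)
        base, number = storage_id
        return f"{base}/{number}"
    if isinstance(storage_id, str) and storage_id.startswith("http"):
        return storage_id                           # per-block storage address (URL)
    # shared storage address plus a per-block storage number
    return f"https://resource.example.com/blocks/{storage_id}"

index_list = [
    {"storage_id": "https://cdn.example.com/video/block_0", "name": "block_0"},
    {"storage_id": 1, "name": "block_1"},
    {"storage_id": ("https://cdn.example.com/video", 2), "name": "block_2"},
]

if __name__ == "__main__":
    for entry in index_list:
        print(entry["name"], "->", resolve_block_location(entry))
```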
In this embodiment, referring to Fig. 4, when the video cover generating apparatus needs to download the data to be processed from the resource server, it can download multiple data blocks simultaneously by parallel multi-threaded downloading, so as to improve download efficiency. Specifically, the apparatus can download the index list of the data to be processed from the resource server and parse its contents. The index list can indicate the storage identifier of each data block, and may also indicate the data volume of each block and the total data volume of the data to be processed. In practical applications, if the total data volume of the data to be processed is small, the apparatus may simply download the data blocks one after another with a single thread. If the total data volume is large, at least two processing threads can be spawned to download the data blocks in parallel. Specifically, the apparatus can distribute the storage identifiers indicated in the index list among the spawned processing threads. Each processing thread can establish its own download task, which may include the storage identifiers of the data blocks it is to download. In this way, the at least two processing threads can, in parallel, download the data blocks pointed to by the storage identifiers, improving the efficiency of obtaining the data to be processed.
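A minimal sketch of this single-thread/multi-thread decision, using Python's thread pool. The size cutoff, worker count, and `fetch_block` stub are assumptions for illustration, not values from the patent.

```python
# Hypothetical sketch of the block download of Fig. 4. fetch_block stands in
# for a real HTTP request; the point is the pattern of distributing storage
# identifiers across threads when the total data volume is large.
from concurrent.futures import ThreadPoolExecutor

SMALL_TOTAL_BYTES = 1024 * 1024  # assumed cutoff below which one thread suffices

def fetch_block(storage_id):
    """Placeholder for downloading one data block by its storage identifier."""
    return f"<data of {storage_id}>"

def download_blocks(storage_ids, total_bytes):
    if total_bytes <= SMALL_TOTAL_BYTES:
        # small payload: download the blocks one after another in one thread
        return [fetch_block(s) for s in storage_ids]
    # large payload: spawn worker threads and download blocks in parallel;
    # map() preserves the original block order for reassembly
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(fetch_block, storage_ids))
```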
S3: identify the type of the data to be processed; if the data to be processed is video data, decode the data to be processed and extract video frames from the decoded video data; send the extracted video frames to a processing queue, so as to generate a video cover based on the video frames in the processing queue.
In this embodiment, after obtaining the data to be processed, the video cover generating apparatus can identify its type, so that different processing can be applied according to the data type. Specifically, video data and image data generally have different file name suffixes, so the type of the data to be processed can be identified from its suffix. For example, data with the suffix avi or mp4 may be video data, while data with the suffix jpg or png may be image data.
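Suffix-based identification can be sketched as below; the suffix sets are the examples named above plus a few common additions of our own, not an exhaustive list from the patent.

```python
# Sketch of suffix-based type identification. The suffix sets are
# illustrative examples, not a definitive mapping.
VIDEO_SUFFIXES = {"avi", "mp4", "mkv"}
IMAGE_SUFFIXES = {"jpg", "jpeg", "png", "bmp"}

def identify_type(filename):
    """Classify data as video or image from its file name suffix."""
    suffix = filename.rsplit(".", 1)[-1].lower()
    if suffix in VIDEO_SUFFIXES:
        return "video"
    if suffix in IMAGE_SUFFIXES:
        return "image"
    return "unknown"
```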
As shown in Fig. 5, in one embodiment, if the data to be processed is video data, it can likewise first be decoded. Depending on its encoding scheme, the video data can be decoded with the corresponding decoding method; common codecs currently include H.261, H.263, H.264, and MPEG. The decoded video data contains a considerable number of video frames; processing all of them would consume substantial computing resources and would also reduce the efficiency of generating the video cover. Therefore, in this embodiment, a certain number of video frames can be extracted from the decoded video data, and subsequent processing can be performed on these extracted frames.
In this embodiment, the number of video frames to extract can be specified in advance, and that number of frames can then be extracted from the video data at random. Alternatively, video frames can be extracted from the decoded video data successively at a specified frame interval. For example, if the specified interval is 200 frames, one video frame can be extracted every 200 frames.
In one embodiment, so that the extracted video frames cover the content of the video more comprehensively, scene switching frames can be determined in the decoded video data and used as the video frames extracted from the decoded video data. A scene switching frame can serve as the video frame lying between two adjacent, different scenes in the video. In this embodiment, to obtain the scene switching frame corresponding to each scene in the video data, extraction can be performed by frame-by-frame comparison. Specifically, a reference frame can first be determined in the video data, and the similarity between the reference frame and each video frame after it can be computed in turn.
In this embodiment, the reference frame can be a frame designated at random within a certain range. For example, the reference frame can be a frame randomly selected from the first two minutes of the video data. Of course, in order not to miss any scene in the video data, the first frame of the video data can be used as the reference frame.
In this embodiment, after the reference frame has been determined, each frame after it can be compared with the reference frame in turn, starting from the reference frame, so as to compute the similarity between each subsequent frame and the reference frame. Specifically, when computing the similarity between a video frame and the reference frame, a first feature vector and a second feature vector can be extracted from the reference frame and the current video frame, respectively.
In this embodiment, the first feature vector and the second feature vector can take many forms. A frame's feature vector can be constructed from the pixel values of the pixels in the frame. Each frame is usually composed of a number of pixels arranged in a certain order, each pixel carrying its own pixel value, which together form a colorful picture. A pixel value can be a number within a specified interval. For example, the pixel value can be a gray value, which can be any number from 0 to 255, the magnitude of the number indicating the depth of the gray level. Of course, the pixel value can also be the values of the color components in another color space. For example, in the RGB (Red, Green, Blue) color space, the pixel value may include an R component value, a G component value, and a B component value.
In this embodiment, the pixel value of each pixel in a frame can be obtained, and the obtained pixel values can form the frame's feature vector. For example, for a current video frame with 9*9=81 pixels, the pixel values can be obtained one by one and then arranged in order from left to right and top to bottom, forming an 81-dimensional vector. This 81-dimensional vector can serve as the feature vector of the current video frame.
In this embodiment, the feature vector can also be a CNN (Convolutional Neural Network) feature of each frame. Specifically, the reference frame and each frame after it can be fed into a convolutional neural network, and the network can then output the feature vectors corresponding to the reference frame and to each of the other frames.
In this embodiment, to characterize the content shown in the reference frame and the current video frame more accurately, the first feature vector and the second feature vector can also represent scale-invariant features of the reference frame and the current video frame, respectively. In this way, even if the rotation angle, brightness, or shooting angle of the image changes, the extracted first and second feature vectors can still reflect the content of the reference frame and the current video frame well. Specifically, the first feature vector and the second feature vector can be SIFT (Scale-Invariant Feature Transform) features, SURF (Speeded Up Robust Features) features, color histogram features, or the like.
In this embodiment, after the first feature vector and the second feature vector have been determined, the similarity between them can be computed. Specifically, in a vector space the similarity can be expressed through the distance between the two vectors: the closer the distance, the more similar the two vectors, and hence the higher the similarity; the farther the distance, the greater the difference between the two vectors, and hence the lower the similarity. Therefore, when computing the similarity between the reference frame and the current video frame, the spatial distance between the first feature vector and the second feature vector can be computed, and the reciprocal of that distance can serve as the similarity between the reference frame and the current video frame. In this way, the smaller the spatial distance, the greater the corresponding similarity, indicating that the reference frame and the current video frame are more alike; conversely, the larger the spatial distance, the smaller the similarity, indicating that the two frames are less alike.
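A minimal sketch of this reciprocal-distance similarity, using Euclidean distance as the spatial distance. The small epsilon guard against identical frames (zero distance) is our addition, not part of the patent.

```python
# Sketch of the similarity measure: Euclidean distance between the two
# feature vectors, with its reciprocal taken as the similarity. The eps
# guard for a zero distance is an assumption added for robustness.
import math

def similarity(vec_a, vec_b, eps=1e-9):
    """Similarity between two feature vectors as 1 / spatial distance."""
    distance = math.dist(vec_a, vec_b)  # Euclidean distance (Python 3.8+)
    return 1.0 / (distance + eps)
```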
In this embodiment, the similarity between each video frame after the reference frame and the reference frame can be computed in turn in the manner described above. The content shown in two frames with a higher similarity is usually also more alike. Thus, to determine the different scenes in the video data, in this embodiment, when the similarity between the reference frame and the current video frame is less than or equal to a specified threshold, the current video frame can be determined to be a scene switching frame. The specified threshold can be a preset value, which can be adjusted flexibly according to the actual situation. For example, when the number of scene switching frames selected under the specified threshold is too large, the threshold can be lowered appropriately; when the number is too small, the threshold can be raised appropriately. In this embodiment, a similarity less than or equal to the specified threshold can indicate that the content of the two frames differs noticeably, so the scene shown by the current video frame can be considered to have changed relative to the scene shown by the reference frame. At this point, the current video frame can be retained as a scene switching frame.
In this embodiment, when the current video frame is determined to be a scene switching frame, subsequent scene switching frames can be determined in the same fashion. Specifically, from the reference frame to the current video frame, the scene can be considered to have changed once, so the current scene is the content shown by the current video frame. On this basis, the current video frame can be taken as a new reference frame, and the similarity between the new reference frame and each video frame after it can be computed in turn, so that the next scene switching frame is determined from the computed similarities. Likewise, when determining the next scene switching frame, the similarity between two frames can still be determined by extracting feature vectors and computing the spatial distance, and the determined similarity can still be compared with the specified threshold, thereby finding the next frame at which the scene changes again after the new reference frame. In this way, once all the scene switching frames have been determined, they can serve as the video frames extracted from the decoded video data.
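The frame-by-frame search above can be sketched as a single loop, here with frames represented directly as feature vectors and the reciprocal Euclidean distance as the similarity. The threshold value in the test is illustrative only; the patent leaves it as a tunable preset.

```python
# Sketch of scene switching detection: each frame is compared with the
# current reference frame; when the similarity drops to or below the
# threshold, the frame is retained as a scene switching frame and becomes
# the new reference frame. Frames here are already feature vectors.
import math

def similarity(vec_a, vec_b, eps=1e-9):
    return 1.0 / (math.dist(vec_a, vec_b) + eps)

def scene_switch_frames(frames, threshold):
    if not frames:
        return []
    reference = frames[0]           # first frame as initial reference frame
    switches = []
    for frame in frames[1:]:
        if similarity(reference, frame) <= threshold:
            switches.append(frame)  # scene changed: retain this frame
            reference = frame       # it becomes the new reference frame
    return switches
```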
S5: if the data to be processed is image data, decode the data to be processed and send the decoded images to the processing queue, so as to generate a video cover based on the images in the processing queue.
Referring to Fig. 5, in this embodiment, if the pending data is image data, the image data can be decoded. Specifically, the suffix of the image data indicates its coded format, so the matching codec format can be used when decoding it. After the image data is decoded, the original image characterized by the image data can be restored.
In this embodiment, after a video frame is extracted or an image is obtained by decoding, the video frame or image can be sent into a processing queue. The processing queue can be a queue in cache or in video memory, and the video frames/images in it can be processed in first-in, first-out order to generate the video cover. Specifically, when generating the video cover, the content shown in each video frame/image corresponding to the same video can be integrated into one image. When integrating the content, the key objects in the video frames/images can be extracted and then combined according to a certain arrangement format and overlay hierarchy to form one video cover. For example, if there are currently 10 video frames in total, the facial expressions of the characters can be extracted from these 10 frames as the key objects, and the extracted facial expressions can then be merged onto one image to obtain the final video cover.
In one embodiment, the pending data obtained in step S1 may include character description information. The character description information can be the title of the video or a brief introduction to the video. The title and the introduction can be edited in advance by the video producer or uploader, or added by the staff who review the video; the present application does not limit this. Of course, in practical applications, in addition to the title and introduction of the video, the character description information can also include word tags of the video or descriptive text extracted from the bullet-screen comment (barrage) information of the video.
In this embodiment, the character description information can relatively accurately reflect the theme of the video. Therefore, a theme label corresponding to the video can be extracted from the character description information. Specifically, a video playback website can summarize the character description information of a large number of videos, filter out the word tags that may serve as video themes, and assemble the filtered word tags into a word tag library. The content of the word tag library can be updated continuously. In this way, when extracting a theme label from the character description information, the character description information can be matched against each word tag in the word tag library, and the word tag obtained by matching serves as the theme label of the video. For example, if the character description information of the video is "Infinite War is at hand; which of the many superheroes will stay", then matching this character description information against the word tags in the word tag library yields the result "superhero". Therefore, "superhero" can serve as the theme label of the video.
In this embodiment, a scene label can also be set for each extracted video frame or image. The scene label can be a word tag characterizing the content shown in the video frame or image. For example, if a video frame shows two people fighting, the corresponding scene label can be "wushu", "fighting", "kung fu", or the like. Specifically, the target objects contained in the video frame or image can be identified by image recognition technology, and the word or phrase characterizing a target object serves as the scene label of the video frame or image.
In this embodiment, it is considered that not all of the content shown by the video frames or images is closely connected with the theme of the video. In order for the generated video cover to accurately reflect the theme of the video, target frames/target images can be filtered out from the multiple video frames/images according to the relevance between each scene label and the theme label.
Taking video frames as an example, in this embodiment the relevance between a scene label and the theme label can refer to the degree of similarity between them: the more similar a scene label is to the theme label, the more related the content shown by the video frame is to the theme of the video. Specifically, determining the relevance between a scene label and the theme label may include calculating the similarity between the scene label of each video frame and the theme label. In practical applications, the scene label and the theme label can each be composed of words; when calculating the similarity between the two, the scene label and the theme label can each be represented by a word vector. In this way, the spatial distance between the two word vectors can represent the similarity between the scene label and the theme label. The closer the spatial distance between the two word vectors, the higher the similarity between the scene label and the theme label; conversely, the farther the spatial distance, the lower the similarity. Thus, in practical application scenarios, the reciprocal of the spatial distance between the two word vectors can serve as the similarity between the scene label and the theme label.
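The reciprocal-distance similarity can be written down directly. A sketch under the embodiment's definition; the guard for a zero distance (identical vectors) is an added assumption, since the text does not discuss that case.

```python
import numpy as np

def label_similarity(scene_vec, theme_vec):
    """Similarity between a scene label and the theme label, taken as
    the reciprocal of the Euclidean distance between their word vectors.

    Returns infinity for identical vectors (assumed handling of the
    zero-distance case, not specified in the embodiment).
    """
    distance = np.linalg.norm(np.asarray(scene_vec, dtype=float)
                              - np.asarray(theme_vec, dtype=float))
    return 1.0 / distance if distance > 0 else float("inf")
```

Any monotone decreasing function of the distance would satisfy the stated requirement; the reciprocal is simply the form the embodiment names.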
In this embodiment, after the similarity between the scene label and the theme label is calculated, a video frame whose calculated similarity is greater than or equal to a specified similarity threshold can be determined as a target frame. The specified similarity threshold serves as a threshold for measuring whether a video frame is sufficiently associated with the theme: when the similarity is greater than or equal to the specified similarity threshold, it indicates that the current video frame is sufficiently associated with the theme of the video and that the content shown by the video frame can accurately reflect the theme, so the video frame can be determined as a target frame.
In addition, in practical applications, the video frame whose scene label has the maximum similarity can also be determined as the target frame. After the target frames are filtered out of the video frames in this way, the video cover can be generated based on the filtered-out target frames. Specifically, if at least two target frames are filtered out, their displayed content can be integrated into the video cover, so that a video cover that matches the theme of the video is obtained. When integrating the content, the key objects in the target frames can be extracted and then combined according to a certain arrangement format and overlay hierarchy to form one video cover. For example, if there are currently 10 target frames in total, the facial expressions of the characters can be extracted from these 10 target frames as the key objects, and the extracted facial expressions can then be merged onto one image to obtain the final video cover. Of course, if only one target frame is filtered out, that target frame can directly serve as the video cover, which simplifies the process of generating the video cover.
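The two selection modes described above — threshold filtering versus keeping the single most similar frame — can be sketched together. The `(index, similarity)` pair representation is an assumption for illustration.

```python
def select_target_frames(similarities, threshold=None):
    """Pick target frames from per-frame label similarities.

    similarities : list of (frame_index, similarity) pairs
    threshold    : if given, keep every frame whose similarity is greater
                   than or equal to it (first mode in the embodiment);
                   otherwise keep only the frame with the maximum
                   similarity (second mode).
    """
    if threshold is not None:
        return [i for i, s in similarities if s >= threshold]
    best = max(similarities, key=lambda pair: pair[1])
    return [best[0]]
```

When the first mode returns exactly one frame, it can serve directly as the cover, matching the single-target-frame shortcut described above.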
It should be noted that, after downloading the pending data through multiple threads, each thread can also continue to perform type identification and subsequent processing on the data blocks it has downloaded. For example, in Fig. 4, after the current thread has downloaded the current data block, it can identify the data type of that block; if the block is video data, the steps of decoding the block and extracting video frames can be performed; if the block contains multiple images, these images can be decoded in turn, and the decoded images are sent into the processing queue.
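A per-thread identify-then-decode pipeline of this kind might look like the sketch below. The block layout (`{"kind", "payload"}`) and the `decode_*` stubs are hypothetical placeholders for real codec calls; only the dispatch structure reflects the description above.

```python
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

processing_queue = Queue()   # FIFO queue the cover generator consumes

def decode_video(payload):
    # placeholder: a real implementation would invoke a video codec
    # and yield decoded frames
    return payload

def decode_image(data):
    # placeholder: a real implementation would invoke an image codec
    return data

def handle_block(block):
    """Per-thread step: identify the block type, then decode accordingly."""
    if block["kind"] == "video":
        for frame in decode_video(block["payload"]):   # extract video frames
            processing_queue.put(frame)
    else:                                              # image block
        for image in block["payload"]:
            processing_queue.put(decode_image(image))  # decode each image

def process_blocks(blocks, workers=2):
    """Each worker thread handles its downloaded blocks end to end."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(handle_block, blocks))
```

Because every thread pushes into the same queue, downstream cover generation sees a single merged stream regardless of which thread decoded each block.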
In one embodiment, it is considered that the video data input into the video cover generating apparatus may come from different videos. To avoid confusion when generating video covers, after video frames are extracted from the decoded video data, an identifier characterizing the decoded video data can be added to each video frame, and the video frames carrying the identifier are sent into the processing queue, so that a video cover is generated based on the video frames in the processing queue that have the same identifier. The identifier can be the back-end numeric ID of the video, or a character string obtained by applying a hash operation to that numeric ID. The present application does not limit the concrete form of the identifier, as long as it can distinguish one video from other videos.
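Either identifier form is easy to produce; the sketch below uses the hashed variant. The choice of SHA-1 and the 12-character truncation are assumptions — the embodiment only requires a hash of the back-end number that distinguishes videos.

```python
import hashlib

def video_identifier(backend_number):
    """Derive a per-video identifier by hashing the back-end numeric ID,
    one of the two identifier forms the embodiment mentions."""
    return hashlib.sha1(str(backend_number).encode()).hexdigest()[:12]

def tag_frames(frames, backend_number):
    """Attach the video's identifier to every extracted frame before it
    is sent into the processing queue."""
    vid = video_identifier(backend_number)
    return [(vid, frame) for frame in frames]
```

Downstream, the cover generator groups queue entries by the attached identifier so frames from different videos are never mixed into one cover.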
In one embodiment, referring to Fig. 6, the decoding of video data and the decoding of image data can be performed by a CPU (Central Processing Unit) and/or a GPU (Graphics Processing Unit). In practical applications, if the decoding speed is too fast while the processing speed of the video frames/images in the processing queue is too slow, the data in the processing queue will overflow, so that some of the extracted video frames and some of the decoded images are discarded, and the generated video cover ultimately cannot accurately characterize the content of the video. Therefore, in this embodiment, the decoding speed of the CPU/GPU and the processing speed of the video frames/images in the processing queue need to be balanced. Specifically, the remaining space in the processing queue that is not currently filled by video frames/images can be detected; the more remaining space there is, the faster the decoding speed can be. In this way, the current decoding speed can be determined based on the remaining space, so that after the pending data is decoded at the current decoding speed, there is always remaining space in the processing queue to accommodate the video frames/images being sent, and decoding too fast will not leave the processing queue without enough remaining space to accommodate them.
In this embodiment, when determining the current decoding speed based on the remaining space, the processed speed of the video frames/images in the processing queue can be obtained in advance. This processed speed can serve as the reference speed for decoding: if the decoding speed is consistent with it, the video frames/images obtained after decoding can be processed in time, and no data redundancy is caused. At the same time, since there is a certain amount of remaining space in the processing queue, the decoding speed can be appropriately increased on the basis of the reference speed, so that there are always video frames/images waiting to be processed in the queue. In this embodiment, a preset association relationship between the remaining space in the processing queue and a gain decoding speed can be established, the gain decoding speed being the additional speed added on top of the reference speed. The preset association relationship can be expressed as follows: as the remaining space decreases, the gain decoding speed also decreases gradually, until it reaches 0. In this way, according to the current remaining space in the processing queue, a target gain decoding speed associated with the current remaining space can be determined, and the sum of the processed speed of the video frames/images and the target gain decoding speed serves as the current decoding speed. For example, if the processed speed of the video frames/images is 50 frames per second and the gain decoding speed is 10 frames per second, the current decoding speed can be 60 frames per second. The decoding speed can be adjusted by controlling the computing resources of the CPU or GPU: the faster the decoding speed, the more computing resources it requires.
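The speed rule above can be sketched directly. The linear mapping from remaining space to gain is an assumed form of the preset association relationship; the embodiment only requires that the gain shrink to 0 as the remaining space shrinks. With a processed speed of 50 fps and a full gain of 10 fps, this reproduces the 60 fps example.

```python
def current_decoding_speed(processed_fps, remaining_slots, queue_capacity,
                           max_gain_fps=10.0):
    """Current decoding speed = reference (processed) speed plus a gain
    that decreases with the remaining space in the processing queue.

    processed_fps   : processed speed of frames/images in the queue
    remaining_slots : slots in the queue not yet filled
    queue_capacity  : total slots in the queue
    max_gain_fps    : gain when the queue is empty (assumed parameter)
    """
    fill = remaining_slots / queue_capacity   # 1.0 = queue fully empty
    gain = max_gain_fps * fill                # gain -> 0 as space -> 0
    return processed_fps + gain
```

When the queue is full (`remaining_slots == 0`), decoding falls back to exactly the processed speed, so the queue can never overflow under this rule.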
Referring to Fig. 7, the present application also provides a video cover generating apparatus. The apparatus includes a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the following steps:
S1: obtaining pending data, the pending data including image data or video data;
S3: identifying the type of the pending data; if the pending data is video data, decoding the pending data and extracting video frames from the decoded video data; sending the extracted video frames into a processing queue, so that a video cover is generated based on the video frames in the processing queue;
S5: if the pending data is image data, decoding the pending data and sending the decoded image into the processing queue, so that a video cover is generated based on the image in the processing queue.
In one embodiment, when the computer program is executed by the processor, the following steps are also implemented:
if the pending data is image data, decoding the pending data and sending the decoded image into the processing queue, so that a video cover is generated based on the image in the processing queue.
In one embodiment, when the computer program is executed by the processor, the following steps are also implemented:
extracting video frames from the decoded video data at a specified frame interval; or
determining scene switching frames in the decoded video data, and taking the scene switching frames as the video frames extracted from the decoded video data.
In one embodiment, when the computer program is executed by the processor, the following steps are also implemented:
detecting the remaining space in the processing queue that is not currently filled by video frames/images, and determining a current decoding speed based on the remaining space, so that after the pending data is decoded at the current decoding speed, there is remaining space in the processing queue to accommodate the video frames/images being sent.
In one embodiment, when the computer program is executed by the processor, the following steps are also implemented:
adding, to the video frames, an identifier characterizing the decoded video data, and sending the video frames carrying the identifier into the processing queue, so that a video cover is generated based on the video frames in the processing queue that have the same identifier.
In this embodiment, the memory may include a physical apparatus for storing information, typically one that digitizes the information and then stores it on media by electrical, magnetic, optical, or similar means. The memory described in this embodiment may include: apparatuses that store information using electrical energy, such as RAM and ROM; apparatuses that store information using magnetic energy, such as hard disks, floppy disks, magnetic tapes, magnetic core memories, magnetic bubble memories, and USB flash drives; and apparatuses that store information optically, such as CDs and DVDs. Of course, there are also memories of other kinds, such as quantum memories and graphene memories.
In this embodiment, the processor can be implemented in any suitable manner. For example, the processor can take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, a logic gate, a switch, an application-specific integrated circuit (ASIC), a programmable logic controller, an embedded microcontroller, or the like.
The specific functions implemented by the memory and processor of the video cover generating apparatus provided in the embodiments of this specification can be explained by comparison with the foregoing embodiments in this specification and can achieve the technical effects of the foregoing embodiments, so they are not repeated here.
The present application also provides a computer storage medium in which a computer program is stored; when the computer program is executed by a processor, the following steps are implemented:
S1: obtaining pending data, the pending data including image data or video data;
S3: identifying the type of the pending data; if the pending data is video data, decoding the pending data and extracting video frames from the decoded video data; sending the extracted video frames into a processing queue, so that a video cover is generated based on the video frames in the processing queue;
S5: if the pending data is image data, decoding the pending data and sending the decoded image into the processing queue, so that a video cover is generated based on the image in the processing queue.
It can be seen, therefore, that the video cover generating apparatus provided by the present application can extend the functions of video cover generating apparatuses in the prior art: it can process both input image data and input video data. Inside the apparatus, the data type of the input can be identified; when the current data is identified as video data, the video data can be decoded to obtain the video frames it contains. A certain number of video frames can then be extracted from the decoded frames and sent directly into the processing queue without an image encoding step, and the video frames in the processing queue can be used for subsequent video cover production. As can be seen from the above, compared with the prior art, the technical solution provided by the present application on the one hand extends the types of pending data; on the other hand, after video frames are extracted from the decoded video data, the extracted frames do not need to be encoded but can be sent directly into the processing queue for processing. This saves the process of encoding the video frames and subsequently decoding the encoded frames, simplifying the video data processing flow while improving the efficiency of generating video covers.
In the 1990s, whether an improvement to a technology was an improvement in hardware (for example, an improvement to circuit structures such as diodes, transistors, and switches) or an improvement in software (an improvement to a method flow) could be clearly distinguished. However, with the development of technology, the improvement of many of today's method flows can be regarded as a direct improvement of a hardware circuit structure: designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that the improvement of a method flow cannot be realized with a hardware entity module. For example, a programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. Designers program on their own to "integrate" a digital system onto a piece of PLD, without needing to ask a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, nowadays, instead of manually fabricating integrated circuit chips, this programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code before compilation also has to be written in a particular programming language, called a hardware description language (HDL). There is not just one HDL but many kinds, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used at present. Those skilled in the art will also appreciate that a hardware circuit implementing a logical method flow can be readily obtained merely by slightly programming the method flow in logic using the above hardware description languages and programming it into an integrated circuit.
Those skilled in the art also know that, besides implementing the apparatus and computer storage medium purely by computer-readable program code, the method steps can be logically programmed so that the apparatus and computer storage medium realize the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such an apparatus and computer storage medium can therefore be regarded as a hardware component, and the means included within it for realizing various functions can also be regarded as structures within the hardware component. Indeed, the means for realizing various functions can even be regarded both as software modules implementing a method and as structures within the hardware component.
From the description of the above embodiments, those skilled in the art can clearly understand that the present application can be implemented by means of software plus a necessary general hardware platform. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a storage medium such as ROM/RAM, a magnetic disk, or an optical disc, and includes a number of instructions for causing a computer device (which can be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments of the present application or in certain parts of the embodiments.
The embodiments in this specification are described in a progressive manner; for identical or similar parts of the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, for the embodiments of the apparatus and the computer storage medium, reference may be made to the introduction of the foregoing method embodiments for comparative explanation.
The present application can be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. The present application can also be practiced in distributed computing environments in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules can be located in both local and remote computer storage media, including storage devices.
Although the present application has been described through embodiments, those of ordinary skill in the art will appreciate that the present application has many variations and changes without departing from its spirit, and it is intended that the appended claims cover these variations and changes without departing from the spirit of the present application.
Claims (16)
1. A method for generating a video cover, characterized in that the method comprises:
obtaining pending data, the pending data comprising image data or video data;
identifying the type of the pending data; if the pending data is video data, decoding the pending data and extracting video frames from the decoded video data; sending the extracted video frames into a processing queue, so that a video cover is generated based on the video frames in the processing queue;
if the pending data is image data, decoding the pending data and sending the decoded image into the processing queue, so that a video cover is generated based on the image in the processing queue.
2. The method according to claim 1, characterized in that, after extracting the video frames from the decoded video data, the method further comprises:
adding, to the video frames, an identifier characterizing the decoded video data, and sending the video frames carrying the identifier into the processing queue, so that a video cover is generated based on the video frames in the processing queue that have the same identifier.
3. The method according to claim 1, characterized in that the pending data further comprises character description information; correspondingly, generating the video cover based on the video frames in the processing queue comprises:
setting a scene label for the video frames, and extracting a theme label from the character description information;
filtering out target frames from the video frames according to the relevance between the scene label and the theme label, and generating the video cover based on the displayed content of the target frames.
4. The method according to claim 3, characterized in that filtering out the target frames from the video frames comprises:
calculating the similarity between the scene label and the theme label, and determining a video frame whose scene label has a calculated similarity greater than or equal to a specified similarity threshold as a target frame; or determining the video frame whose scene label has the maximum similarity as the target frame.
5. The method according to claim 1, characterized in that extracting the video frames from the decoded video data comprises:
extracting video frames from the decoded video data at a specified frame interval;
or
determining scene switching frames in the decoded video data, and taking the scene switching frames as the video frames extracted from the decoded video data.
6. The method according to claim 5, characterized in that determining the scene switching frames in the decoded video data comprises:
determining a reference frame in the decoded video data, and calculating in turn the similarity between each video frame after the reference frame and the reference frame;
if the similarity between a current video frame in the decoded video data and the reference frame is less than or equal to a specified threshold, determining the current video frame as a scene switching frame;
taking the current video frame as a new reference frame, and calculating in turn the similarity between each video frame after the new reference frame and the new reference frame, so as to determine the next scene switching frame according to the calculated result.
7. The method according to claim 1, characterized in that the pending data is downloaded from a resource server, and the pending data is divided into multiple data blocks stored in the resource server; correspondingly, obtaining the pending data comprises:
downloading an index list of the pending data from the resource server, the index list comprising storage identifiers of the data blocks in the pending data;
opening up at least two processing threads, and configuring a storage identifier to be processed for each of the at least two processing threads according to the index list;
downloading, in parallel through the at least two processing threads, the data blocks pointed to by the storage identifiers to be processed.
8. The method according to claim 1, characterized in that, when the pending data is decoded, the method further comprises:
detecting the remaining space in the processing queue that is not currently filled by video frames/images, and determining a current decoding speed based on the remaining space, so that after the pending data is decoded at the current decoding speed, there is remaining space in the processing queue to accommodate the video frames/images being sent.
9. The method according to claim 8, characterized in that determining the current decoding speed based on the remaining space comprises:
obtaining the processed speed of the video frames/images in the processing queue;
determining, according to a preset association relationship between remaining space and gain decoding speed, a target gain decoding speed associated with the current remaining space in the processing queue;
taking the sum of the processed speed of the video frames/images and the target gain decoding speed as the current decoding speed.
10. An apparatus for generating a video cover, characterized in that the apparatus comprises a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the following steps:
obtaining pending data, the pending data comprising image data or video data;
identifying the type of the pending data; if the pending data is video data, decoding the pending data and extracting video frames from the decoded video data; sending the extracted video frames into a processing queue, so that a video cover is generated based on the video frames in the processing queue;
if the pending data is image data, decoding the pending data and sending the decoded image into the processing queue, so that a video cover is generated based on the image in the processing queue.
11. The apparatus according to claim 10, characterized in that, when the computer program is executed by the processor, the following steps are also implemented:
adding, to the video frames, an identifier characterizing the decoded video data, and sending the video frames carrying the identifier into the processing queue, so that a video cover is generated based on the video frames in the processing queue that have the same identifier.
12. The apparatus according to claim 10, characterized in that the pending data further comprises character description information; correspondingly, when the computer program is executed by the processor, the following steps are also implemented:
setting a scene label for the video frames, and extracting a theme label from the character description information;
filtering out target frames from the video frames according to the relevance between the scene label and the theme label, and generating the video cover based on the displayed content of the target frames.
13. The apparatus according to claim 12, characterized in that, when the computer program is executed by the processor, the following steps are also implemented:
calculating the similarity between the scene label and the theme label, and determining a video frame whose scene label has a calculated similarity greater than or equal to a specified similarity threshold as a target frame; or determining the video frame whose scene label has the maximum similarity as the target frame.
14. The apparatus according to claim 10, characterized in that, when the computer program is executed by the processor, the following steps are also implemented:
extracting video frames from the decoded video data at a specified frame interval; or
determining scene switching frames in the decoded video data, and taking the scene switching frames as the video frames extracted from the decoded video data.
15. The device according to claim 10, wherein when the computer program is executed by the processor, the following steps are further performed:
detecting the remaining space in the processing queue that is not occupied by video frames/images at the current moment, and determining a current decoding speed based on the remaining space, so that after the pending data is decoded at the current decoding speed, the processing queue still has remaining space for accommodating the video frames/images to be sent.
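The back-pressure idea of claim 15 can be sketched with a simple linear mapping from free queue space to decode rate. The capacity, the 60 fps ceiling, and the linear scaling are all illustrative assumptions, not values from the patent.

```python
QUEUE_CAPACITY = 100  # assumed maximum number of queued frames/images
MAX_FPS = 60          # assumed upper bound on the decode rate

def remaining_space(queue_len, capacity=QUEUE_CAPACITY):
    """Slots in the processing queue not yet occupied."""
    return capacity - queue_len

def decoding_speed(queue_len, capacity=QUEUE_CAPACITY, max_fps=MAX_FPS):
    """Scale the decode rate with free space: full speed on an empty
    queue, throttled as it fills, zero when the queue is full."""
    free = remaining_space(queue_len, capacity)
    return max_fps * free // capacity

fast = decoding_speed(10)    # mostly empty queue: decode near full speed
slow = decoding_speed(95)    # nearly full queue: throttle down
stall = decoding_speed(100)  # full queue: pause decoding entirely
```

Tying the decode rate to free space guarantees the condition in the claim: the queue always retains room for the frames that decoding is about to send.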
16. A computer storage medium, wherein a computer program is stored in the computer storage medium, and when the computer program is executed by a processor, the following steps are performed:
acquiring pending data, the pending data including image data or video data;
identifying the type of the pending data; if the pending data is video data, decoding the pending data, extracting video frames from the decoded video data, and sending the extracted video frames to a processing queue, so as to generate a video cover based on the video frames in the processing queue;
if the pending data is image data, decoding the pending data and sending the decoded image to the processing queue, so as to generate a video cover based on the image in the processing queue.
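The type dispatch of claim 16 can be sketched end to end as below. Decoding is mocked (the input list already plays the role of decoded data), the every-second-frame sampling and first-item cover policy are arbitrary placeholders, and all names are hypothetical.

```python
processing_queue = []

def handle(pending):
    """Route pending data by type into the shared processing queue."""
    if pending["type"] == "video":
        decoded = pending["data"]  # stands in for a real video decoder
        frames = decoded[::2]      # sample every 2nd decoded frame
        processing_queue.extend(frames)
    elif pending["type"] == "image":
        # Images need no frame extraction: the decoded image goes
        # straight into the queue.
        processing_queue.append(pending["data"])

def generate_cover(queue):
    """Trivial cover policy: take the first queued frame/image."""
    return queue[0] if queue else None

handle({"type": "video", "data": ["f0", "f1", "f2", "f3"]})
handle({"type": "image", "data": "img0"})
cover = generate_cover(processing_queue)
```

The point of the shared queue is that both branches converge on one cover-generation stage, so the downstream logic does not care whether the source was a video or a still image.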
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810286238.3A CN110324706B (en) | 2018-03-30 | 2018-03-30 | Video cover generation method and device and computer storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110324706A true CN110324706A (en) | 2019-10-11 |
CN110324706B CN110324706B (en) | 2022-03-04 |
Family
ID=68112027
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810286238.3A Active CN110324706B (en) | 2018-03-30 | 2018-03-30 | Video cover generation method and device and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110324706B (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110856037A (en) * | 2019-11-22 | 2020-02-28 | 北京金山云网络技术有限公司 | Video cover determination method and device, electronic equipment and readable storage medium |
CN111491182A (en) * | 2020-04-23 | 2020-08-04 | 百度在线网络技术(北京)有限公司 | Method and device for video cover storage and analysis |
CN111654673A (en) * | 2020-06-01 | 2020-09-11 | 杭州海康威视系统技术有限公司 | Video cover updating method and device and storage medium |
CN111901679A (en) * | 2020-08-10 | 2020-11-06 | 广州繁星互娱信息科技有限公司 | Method and device for determining cover image, computer equipment and readable storage medium |
CN111918025A (en) * | 2020-06-29 | 2020-11-10 | 北京大学 | Scene video processing method and device, storage medium and terminal |
CN112434234A (en) * | 2020-05-15 | 2021-03-02 | 上海哔哩哔哩科技有限公司 | Frame extraction method and system based on browser |
CN112911337A (en) * | 2021-01-28 | 2021-06-04 | 北京达佳互联信息技术有限公司 | Method and device for configuring video cover pictures of terminal equipment |
CN113051236A (en) * | 2021-03-09 | 2021-06-29 | 北京沃东天骏信息技术有限公司 | Method and device for auditing video and computer-readable storage medium |
CN113067989A (en) * | 2021-06-01 | 2021-07-02 | 神威超算(北京)科技有限公司 | Data processing method and chip |
CN113301422A (en) * | 2021-05-24 | 2021-08-24 | 腾讯音乐娱乐科技(深圳)有限公司 | Method, terminal and storage medium for acquiring video cover |
CN116777914A (en) * | 2023-08-22 | 2023-09-19 | 腾讯科技(深圳)有限公司 | Data processing method, device, equipment and computer readable storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105094513A (en) * | 2014-05-23 | 2015-11-25 | 腾讯科技(北京)有限公司 | User avatar setting method and apparatus as well as electronic device |
US20160378307A1 (en) * | 2015-06-26 | 2016-12-29 | Rovi Guides, Inc. | Systems and methods for automatic formatting of images for media assets based on user profile |
CN106572380A (en) * | 2016-10-19 | 2017-04-19 | 上海传英信息技术有限公司 | User terminal and video dynamic thumbnail generating method |
CN106713964A (en) * | 2016-12-05 | 2017-05-24 | 乐视控股(北京)有限公司 | Method of generating video abstract viewpoint graph and apparatus thereof |
CN107832724A (en) * | 2017-11-17 | 2018-03-23 | 北京奇虎科技有限公司 | The method and device of personage's key frame is extracted from video file |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110324706A (en) | Video cover generation method and device, and computer storage medium | |
Duan et al. | Video coding for machines: A paradigm of collaborative compression and intelligent analytics | |
Yue et al. | Cloud-based image coding for mobile devices—Toward thousands to one compression | |
Duan et al. | Compact descriptors for video analysis: The emerging MPEG standard | |
WO2021232969A1 (en) | Action recognition method and apparatus, and device and storage medium | |
CN111954053B (en) | Method for acquiring mask frame data, computer equipment and readable storage medium | |
US10448054B2 (en) | Multi-pass compression of uncompressed data | |
Wang et al. | Towards analysis-friendly face representation with scalable feature and texture compression | |
TWI712316B (en) | Method and device for generating video summary | |
WO2022188644A1 (en) | Word weight generation method and apparatus, and device and medium | |
Wang et al. | A surveillance video analysis and storage scheme for scalable synopsis browsing | |
CN116233445B (en) | Video encoding and decoding processing method and device, computer equipment and storage medium | |
WO2020070387A1 (en) | A method and apparatus for training a neural network used for denoising | |
CN112804558A (en) | Video splitting method, device and equipment | |
CN110691246B (en) | Video coding method and device and electronic equipment | |
Khan et al. | Sparse to dense depth completion using a generative adversarial network with intelligent sampling strategies | |
US10924637B2 (en) | Playback method, playback device and computer-readable storage medium | |
Han | Texture Image Compression Algorithm Based on Self‐Organizing Neural Network | |
US11095901B2 (en) | Object manipulation video conference compression | |
CN116662604A (en) | Video abstraction method based on layered Transformer | |
Zhai | Auto-encoder generative adversarial networks | |
CN103139566A (en) | Method for efficient decoding of variable length codes | |
CN112714336B (en) | Video segmentation method and device, electronic equipment and computer readable storage medium | |
CN111383289A (en) | Image processing method, image processing device, terminal equipment and computer readable storage medium | |
CN105306961B (en) | A kind of method and device for taking out frame |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
Effective date of registration: 2020-05-12
Address after: Room 508, Floor 5, Building 4, No. 699 Wangshang Road, Changhe Street, Binjiang District, Hangzhou City, Zhejiang Province, 310052
Applicant after: Alibaba (China) Co., Ltd.
Address before: Blocks A and C, Floor 5, Sinosteel International Plaza, No. 8 Haidian Street, Haidian District, Beijing, 100080
Applicant before: Youku Network Technology (Beijing) Co., Ltd.
GR01 | Patent grant | ||