CN109635747A - Method and device for automatic extraction of a video cover - Google Patents
Method and device for automatic extraction of a video cover
- Publication number
- CN109635747A (application CN201811532062.1A)
- Authority
- CN
- China
- Prior art keywords
- frame
- feature point
- video
- level
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
- G06V20/47—Detecting features for summarising video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
Abstract
The invention discloses a method and device for automatically extracting a video cover. The method comprises: S10, decoding the video to obtain a sequence of frames; S20, extracting the local feature points of each frame in turn and computing the strength of each feature point; S30, for each frame, retaining a preset number of the strongest feature points; S40, quantizing the retained feature points to K levels; S50, for each frame, counting the number of feature points at each of the K levels to obtain the frame's feature vector; S60, applying K-means clustering to the feature vectors of all frames and taking the largest cluster as the main scene of the video; S70, extracting the frame closest to the center of the main-scene cluster and using it as the cover image of the video. This realizes automatic and accurate extraction of a cover image that reflects the content of the video as much as possible and distinguishes it from other videos of the same type.
Description
Technical field
The present invention relates to the field of data processing, and in particular to a method and device for automatically extracting a video cover.
Background technique
When a video is displayed on a web page or in a mobile application (app), a cover image usually needs to be provided for it. To give the user as much information as possible, the choice of cover should satisfy two requirements: it should reflect the content of the video as far as possible, and it should distinguish the video from others of the same type as far as possible.
When high artistic quality is required, as with film posters, the cover is usually created manually by artistic design, yielding a polished poster. When the artistic requirements are lower, as with online video, the cover is usually obtained by manually selecting a frame from the video; this approach is labor-intensive, slow, and cannot be automated at scale.
Summary of the invention
In view of the above shortcomings of the prior art, the present invention provides a method and device for automatically extracting a video cover, effectively solving the technical problem that the prior art cannot automatically extract the cover image that best reflects the content of a video.
To achieve the above goal, the invention is realized through the following technical scheme:
A method for automatically extracting a video cover, comprising:
S10, decoding the video to obtain a sequence of frames;
S20, extracting the local feature points of each frame in turn, and computing the strength of each feature point;
S30, for each frame, retaining a preset number of the strongest feature points;
S40, quantizing the retained feature points to K levels;
S50, for each frame, counting the number of feature points at each of the K levels to obtain the frame's feature vector;
S60, applying K-means clustering to the feature vectors of all frames, and taking the largest cluster as the main scene of the video;
S70, extracting the frame closest to the center of the main-scene cluster, and using it as the cover image of the video.
It is further preferred that, after step S10, the method further comprises:
S11, sampling the decoded frame sequence according to a preset rule to obtain a set of frames;
and in step S20, the local feature points of each frame in the set are extracted in turn.
It is further preferred that step S20 specifically comprises: extracting the SIFT or SURF feature points of each frame in turn, and computing the strength of each feature point.
It is further preferred that step S40 further comprises:
S41, applying K-means clustering to the retained feature points;
S42, quantizing the feature points according to the center points of the resulting clusters.
It is further preferred that step S40 specifically comprises: quantizing the retained feature points to K levels according to K preset feature-point levels.
The present invention also provides a device for automatically extracting a video cover, comprising:
a video decoding module, for decoding the video to obtain a sequence of frames;
a feature extraction module, for extracting in turn the local feature points of each frame obtained by the video decoding module, and computing the strength of each feature point;
a feature-point retention module, for retaining, for each frame, a preset number of the strongest feature points according to the results of the feature extraction module;
a quantization module, for quantizing the feature points retained by the feature-point retention module to K levels;
a statistics module, for counting, for each frame, the number of feature points at each of the K levels according to the results of the quantization module, obtaining the feature vector of each frame;
a clustering module, for applying K-means clustering to the feature vectors of all frames obtained by the statistics module, and taking the largest cluster as the main scene of the video;
a cover extraction module, for extracting the frame closest to the center of the main-scene cluster, and using it as the cover image of the video.
It is further preferred that the device for automatically extracting a video cover further comprises:
a frame sampling module, for sampling the decoded frame sequence according to a preset rule to obtain a set of frames;
and in the feature extraction module, the local feature points of each frame in the set obtained by the frame sampling module are extracted in turn, and the strength of each feature point is computed.
It is further preferred that the feature extraction module extracts the SIFT or SURF feature points of each frame in turn, and computes the strength of each feature point.
It is further preferred that the quantization module comprises:
a clustering unit, for applying K-means clustering to the feature points retained by the feature-point retention module;
a quantizing unit, for quantizing the feature points according to the cluster center points obtained by the clustering unit.
It is further preferred that the quantization module quantizes the retained feature points to K levels according to K preset feature-point levels.
In the method and device for automatically extracting a video cover provided by the invention, the video is decoded, the local feature points of each frame are extracted, a preset number of the strongest feature points are retained in each frame and quantized to K levels, and the quantized feature vector of each frame is thereby obtained. The feature vectors are clustered, the cluster containing the most samples is taken as the main scene of the video, and the frame closest to the center of that cluster is extracted as the cover image. This realizes automatic and accurate extraction of a cover image that reflects the content of the video as much as possible and distinguishes it from other videos of the same type.
Detailed description of the invention
A more complete understanding of the present invention, together with its attendant advantages and features, will be more easily obtained by reference to the following detailed description taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flow diagram of the method for automatically extracting a video cover according to the present invention;
Fig. 2 is a schematic diagram of the SIFT feature extraction process according to the present invention;
Fig. 3 is a structural diagram of the device for automatically extracting a video cover according to the present invention.
100 - device for automatically extracting a video cover; 110 - video decoding module; 120 - feature extraction module; 130 - feature-point retention module; 140 - quantization module; 150 - statistics module; 160 - clustering module; 170 - cover extraction module.
Specific embodiments
To make the contents of the present invention clearer and easier to understand, they are further explained below in conjunction with the accompanying drawings. The invention is, of course, not limited to these specific embodiments; general substitutions known to those skilled in the art are also included within the scope of protection of the present invention.
Fig. 1 shows the flow of the method for automatically extracting a video cover provided by the present invention. As can be seen from the figure, the method comprises:
S10, decoding the video to obtain a sequence of frames;
S20, extracting the local feature points of each frame in turn, and computing the strength of each feature point;
S30, for each frame, retaining a preset number of the strongest feature points;
S40, quantizing the retained feature points to K levels;
S50, for each frame, counting the number of feature points at each of the K levels to obtain the frame's feature vector;
S60, applying K-means clustering to the feature vectors of all frames, and taking the largest cluster as the main scene of the video;
S70, extracting the frame closest to the center of the main-scene cluster, and using it as the cover image of the video.
In this method, after the video V whose cover is to be extracted has been obtained, it is decoded to obtain the frame sequence V = (F1, F2, F3, ..., Fn), where n denotes the number of frames in the video and Fn denotes the n-th frame.
After the frame sequence is obtained, the local feature points of each frame are extracted in turn, for example SIFT or SURF feature points, and the strength of each feature point is computed. The algorithm used for local feature extraction is not specifically limited here; any algorithm may be used. Taking SIFT as an example, the SIFT algorithm computes multiple SIFT feature points in a frame, each described by a 128-dimensional vector, and the strength of each local feature point can be computed from this 128-dimensional vector. As shown in Fig. 2, Fig. 2(a) is an original frame image; the SIFT feature points of this frame are extracted with the SIFT algorithm and the strength of each is computed, as shown in Fig. 2(b), where the large circles indicate strong SIFT feature points and the small circles indicate weak ones.
After the strength of each extracted local feature point has been computed, the strongest feature points in each frame image are retained: for each frame, the strengths of all feature points are sorted, and a preset number of feature points are kept. In one example, with a preset number of 100, the first 100 feature points in the strength ordering (from strongest to weakest) are retained.
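The retention step S30 amounts to a sort-and-truncate over the per-point strengths. A minimal NumPy sketch follows; the descriptors and strengths are synthetic stand-ins for illustration only (in practice they would come from a SIFT/SURF extractor, where e.g. OpenCV exposes the strength as `KeyPoint.response`):

```python
import numpy as np

def keep_strongest(descriptors, strengths, preset_quantity=100):
    """Keep the `preset_quantity` feature points with the largest strength,
    ordered from strongest to weakest, as in step S30."""
    order = np.argsort(strengths)[::-1][:preset_quantity]
    return descriptors[order], strengths[order]

# Synthetic stand-in for one frame: 500 SIFT-like 128-dimensional descriptors.
rng = np.random.default_rng(0)
desc = rng.random((500, 128))
strength = rng.random(500)

kept_desc, kept_strength = keep_strongest(desc, strength, 100)
print(kept_desc.shape)                       # (100, 128)
print(bool(np.all(np.diff(kept_strength) <= 0)))   # True: strongest to weakest
```

Applying the same routine to every frame leaves all frames with an identical number of retained points, which is what makes the later per-frame histograms comparable.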
After the same number of feature points has been retained for every frame image, each feature point is quantized to K levels.
For the method for quantization, in one embodiment, realized using the method for cluster, it is specific: first against all frames
The characteristic point of reservation carries out Kmeans cluster, after obtaining K cluster, the central point of each cluster is calculated, with this according to each
The central point of cluster carries out the quantization of K rank to the characteristic point that all frames retain, and the corresponding characteristic point of each classification is quantified as such
The value of other central point.In one example, it is assumed that carry out the K cluster that Kmeans is clustered for the characteristic point that all frames retain
Respectively p1,p2,...,pk(pkIndicate k-th of classification), the central point vector of each classification is z1,z2,...,zk(zkIndicate kth
A classification pkCentral point vector, each central point vector corresponds to single order characteristic point, total k rank), will be every during quantization
The corresponding characteristic point of a classification is quantified as the central point vector of the category, will such as cluster as classification pkCharacteristic point be quantified as vector
zk。
In another embodiment, the quantization to K levels is performed according to K preset feature-point levels. For example, K feature-point levels q1, q2, ..., qK are set in advance, together with a rule that quantizes the feature points of a given region to a given level; e.g. the feature points of one region are quantized to q1, the feature points of another region to q2, and so on, realizing the K-level quantization of all feature points. It should be noted that the quantization method is not specifically limited here; in principle, any method that quantizes the retained feature points according to some rule falls within the content of the present invention.
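The clustering-based embodiment (S41 and S42) is essentially the construction of a bag-of-visual-words codebook: cluster all retained descriptors into K groups and replace each descriptor by its cluster center. A self-contained sketch using a small NumPy implementation of Lloyd's K-means; the `kmeans` helper and the synthetic descriptors are illustrative assumptions, not the patent's code:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Minimal Lloyd's K-means: returns (cluster centers, point labels)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

# Feature points retained from all frames (synthetic 128-d descriptors).
rng = np.random.default_rng(1)
all_points = rng.random((1000, 128))

K = 8
centers, labels = kmeans(all_points, K)   # clusters p1..pK, centers z1..zK
quantized = centers[labels]               # each point replaced by its center zk
print(quantized.shape)                    # (1000, 128)
```

A production system would more likely use an off-the-shelf K-means (and a larger K); the point here is only that after this step every retained feature point is identified with one of K level vectors.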
After quantization is completed, the feature points retained for each frame are counted to obtain the number of feature points at each of the K levels, and the feature vector of each frame is then obtained from these statistics. Specifically, let the K levels be q1, q2, ..., qK. If, in a frame, the number of feature points quantized to the first level q1 is N1, the number quantized to the second level q2 is N2, ..., and the number quantized to the K-th level qK is NK, then the feature vector of that frame is (N1, N2, ..., NK). The feature vector of every frame is obtained in this way.
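Step S50 is thus a per-frame histogram over the K quantization levels. Assuming each retained point has already been assigned a level index in 0..K-1 (an assumption of this sketch; the patent indexes levels from 1), the count vector (N1, ..., NK) can be sketched as:

```python
import numpy as np

def frame_feature_vector(level_labels, k):
    """Count how many of a frame's retained feature points fall at each of
    the K levels, giving the frame's feature vector (N1, ..., NK)."""
    return np.bincount(level_labels, minlength=k)

# Quantized level indices of one frame's retained feature points.
labels = np.array([0, 2, 2, 1, 0, 2])
vec = frame_feature_vector(labels, k=4)
print(vec)          # [2 1 3 0]
print(vec.sum())    # 6, i.e. the number of retained points
```

Because every frame retains the same preset number of points, these histograms all sum to the same total and can be compared directly in the next clustering step.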
After the feature vector of every frame has been obtained, K-means clustering is applied to the frame feature vectors, yielding C clusters, C = (A1, A2, ..., AC), where Ac denotes the c-th cluster. The number of samples (frame feature vectors) in each cluster is counted; the cluster with the most samples is taken as the main scene of the video V, and the feature vector of its center point (N1c, N2c, ..., NKc) is obtained.
Finally, the Euclidean distance between every frame of the video V and the center vector (N1c, N2c, ..., NKc) is computed and compared, and the nearest frame is taken as the cover image of the video, completing the automatic extraction of the video cover.
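Steps S60 and S70 can be sketched end-to-end on the per-frame feature vectors: cluster them, take the largest cluster as the main scene, and return the frame whose vector is nearest (in Euclidean distance) to that cluster's center. The `choose_cover` helper and the synthetic vectors below are illustrative assumptions under this reading of the method:

```python
import numpy as np

def choose_cover(frame_vecs, c=3, iters=20, seed=0):
    """K-means over per-frame feature vectors (S60); the largest cluster is
    the main scene, and the frame nearest its center is the cover (S70)."""
    rng = np.random.default_rng(seed)
    centers = frame_vecs[rng.choice(len(frame_vecs), c, replace=False)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(frame_vecs[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(c):
            if np.any(labels == j):
                centers[j] = frame_vecs[labels == j].mean(axis=0)
    main = np.bincount(labels, minlength=c).argmax()   # cluster with most frames
    dist = np.linalg.norm(frame_vecs - centers[main], axis=1)
    return int(dist.argmin())                          # index of the cover frame

# Synthetic per-frame feature vectors: 30 frames, K = 8 levels.
rng = np.random.default_rng(2)
frame_vecs = rng.integers(0, 20, size=(30, 8)).astype(float)
cover_idx = choose_cover(frame_vecs)
print(0 <= cover_idx < 30)   # True
```

The number of clusters C is a free parameter here; the patent only requires that the most populous cluster be treated as the main scene.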
In other embodiments, if the number of frames in the video is very large, the decoded frame sequence is sampled according to a preset rule after decoding, yielding a set of frames, and the automatic cover extraction is then performed on the frames in this set, reducing the amount of computation. The preset rule can be set according to the actual situation: for example, one frame may be sampled every 1 s (second); the sampled range may be all frames of the video, or only a part of the video (a representative segment of its content).
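The "one frame per second" preset rule reduces to index arithmetic over the decoded sequence. A sketch of the index computation (actual frame grabbing, e.g. with OpenCV's `VideoCapture`, is omitted as an implementation detail):

```python
def sample_frame_indices(n_frames, fps, interval_s=1.0):
    """Indices of the frames to keep when sampling one frame every
    `interval_s` seconds from a video of `n_frames` frames at `fps`."""
    step = max(1, round(fps * interval_s))
    return list(range(0, n_frames, step))

# A 10-second clip at 25 fps, sampled once per second:
idx = sample_frame_indices(n_frames=250, fps=25, interval_s=1.0)
print(len(idx))   # 10
print(idx[:3])    # [0, 25, 50]
```

Restricting sampling to a representative segment, as the text allows, would simply mean passing a sub-range of frame indices instead of the whole sequence.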
The present invention also provides a device for automatically extracting a video cover. As shown in Fig. 3, the device 100 comprises:
a video decoding module 110, for decoding the video to obtain a sequence of frames;
a feature extraction module 120, for extracting in turn the local feature points of each frame obtained by the video decoding module 110, and computing the strength of each feature point;
a feature-point retention module 130, for retaining, for each frame, a preset number of the strongest feature points according to the results of the feature extraction module 120;
a quantization module 140, for quantizing the feature points retained by the feature-point retention module 130 to K levels;
a statistics module 150, for counting, for each frame, the number of feature points at each of the K levels according to the results of the quantization module 140, obtaining the feature vector of each frame;
a clustering module 160, for applying K-means clustering to the feature vectors of all frames obtained by the statistics module 150, and taking the largest cluster as the main scene of the video;
a cover extraction module 170, for extracting the frame closest to the center of the main-scene cluster, and using it as the cover image of the video.
In the device 100, after the video V whose cover is to be extracted has been obtained, the video decoding module 110 decodes it to obtain the frame sequence V = (F1, F2, F3, ..., Fn), where n denotes the number of frames in the video and Fn denotes the n-th frame.
After the frame sequence is obtained, the feature extraction module 120 extracts the local feature points of each frame in turn, for example SIFT or SURF feature points, and computes the strength of each feature point. The algorithm used for local feature extraction is not specifically limited here; any algorithm may be used. Taking SIFT as an example, the SIFT algorithm computes multiple SIFT feature points in a frame, each described by a 128-dimensional vector, and the strength of each local feature point can be computed from this 128-dimensional vector. As shown in Fig. 2, Fig. 2(a) is an original frame image; the SIFT feature points of this frame are extracted with the SIFT algorithm and the strength of each is computed, as shown in Fig. 2(b), where the large circles indicate strong SIFT feature points and the small circles indicate weak ones.
After the strength of each extracted local feature point has been computed, the feature-point retention module 130 retains the strongest feature points in each frame image: for each frame, the strengths of all feature points are sorted, and a preset number of feature points are kept. In one example, with a preset number of 100, the first 100 feature points in the strength ordering (from strongest to weakest) are retained. The specific value of the preset number is not limited here and can be set according to the actual situation, e.g. 50, 80, 120, 150, 200, or even more.
After the same number of feature points has been retained for every frame image, the quantization module 140 quantizes each feature point to K levels.
For the method for quantization, in one embodiment, realized using the method for cluster, it is specific: cluster cell needle first
Kmeans cluster is carried out to the characteristic point that all frames retain, after obtaining K cluster, the central point of each cluster is calculated, with
This quantifying unit carries out the quantization of K rank to the characteristic point that all frames retain according to the central point of each cluster, and each classification is corresponding
Characteristic point is quantified as the value of category central point.In one example, it is assumed that carry out Kmeans for the characteristic point that all frames retain
Clustering K obtained cluster is respectively p1,p2,...,pk(pkIndicate k-th of classification), the central point vector of each classification is z1,
z2,...,zk(zkIndicate k-th of classification pkCentral point vector, each central point vector corresponds to single order characteristic point, total k rank),
During quantization, the corresponding characteristic point of each classification is quantified as to the central point vector of the category, will such as be clustered as classification pk
Characteristic point be quantified as vector zk。
In another embodiment, the quantization to K levels is performed according to K preset feature-point levels. For example, K feature-point levels q1, q2, ..., qK are set in advance, together with a rule that quantizes the feature points of a given region to a given level; e.g. the feature points of one region are quantized to q1, the feature points of another region to q2, and so on, realizing the K-level quantization of all feature points. It should be noted that the quantization method is not specifically limited here; in principle, any method that quantizes the retained feature points according to some rule falls within the content of the present invention.
After quantization is completed, the statistics module 150 counts the feature points retained for each frame to obtain the number of feature points at each of the K levels, and then obtains the feature vector of each frame from these statistics. Specifically, let the K levels be q1, q2, ..., qK. If, in a frame, the number of feature points quantized to the first level q1 is N1, the number quantized to the second level q2 is N2, ..., and the number quantized to the K-th level qK is NK, then the feature vector of that frame is (N1, N2, ..., NK). The feature vector of every frame is obtained in this way.
After the feature vector of every frame has been obtained, the clustering module 160 applies K-means clustering to the frame feature vectors, yielding C clusters, C = (A1, A2, ..., AC), where Ac denotes the c-th cluster. The number of samples (frame feature vectors) in each cluster is counted; the cluster with the most samples is taken as the main scene of the video V, and the feature vector of its center point (N1c, N2c, ..., NKc) is obtained.
Finally, the cover extraction module 170 computes and compares the Euclidean distance between every frame of the video V and the center vector (N1c, N2c, ..., NKc), and takes the nearest frame as the cover image of the video, completing the automatic extraction of the video cover.
In other embodiments, if the number of frames in the video is very large, a frame sampling module is additionally provided in the device 100. After the frame sequence of the video is obtained by decoding, the frame sampling module samples it according to a preset rule to obtain a set of frames, and the automatic cover extraction is then performed on the frames in this set, reducing the amount of computation. The preset rule can be set according to the actual situation: for example, one frame may be sampled every 1 s (second); the sampled range may be all frames of the video, or only a part of the video (a representative segment of its content).
Claims (10)
1. A method for automatically extracting a video cover, characterized by comprising:
S10, decoding the video to obtain a sequence of frames;
S20, extracting the local feature points of each frame in turn, and computing the strength of each feature point;
S30, for each frame, retaining a preset number of the strongest feature points;
S40, quantizing the retained feature points to K levels;
S50, for each frame, counting the number of feature points at each of the K levels to obtain the frame's feature vector;
S60, applying K-means clustering to the feature vectors of all frames, and taking the largest cluster as the main scene of the video;
S70, extracting the frame closest to the center of the main-scene cluster, and using it as the cover image of the video.
2. The method for automatically extracting a video cover according to claim 1, characterized in that, after step S10, the method further comprises:
S11, sampling the decoded frame sequence according to a preset rule to obtain a set of frames;
and in step S20, the local feature points of each frame in the set are extracted in turn.
3. The method for automatically extracting a video cover according to claim 1, characterized in that step S20 specifically comprises: extracting the SIFT or SURF feature points of each frame in turn, and computing the strength of each feature point.
4. The method for automatically extracting a video cover according to claim 1, 2 or 3, characterized in that step S40 further comprises:
S41, applying K-means clustering to the retained feature points;
S42, quantizing the feature points according to the center points of the resulting clusters.
5. The method for automatically extracting a video cover according to claim 1, 2 or 3, characterized in that step S40 specifically comprises: quantizing the retained feature points to K levels according to K preset feature-point levels.
6. A device for automatically extracting a video cover, characterized by comprising:
a video decoding module, for decoding the video to obtain a sequence of frames;
a feature extraction module, for extracting in turn the local feature points of each frame obtained by the video decoding module, and computing the strength of each feature point;
a feature-point retention module, for retaining, for each frame, a preset number of the strongest feature points according to the results of the feature extraction module;
a quantization module, for quantizing the feature points retained by the feature-point retention module to K levels;
a statistics module, for counting, for each frame, the number of feature points at each of the K levels according to the results of the quantization module, obtaining the feature vector of each frame;
a clustering module, for applying K-means clustering to the feature vectors of all frames obtained by the statistics module, and taking the largest cluster as the main scene of the video;
a cover extraction module, for extracting the frame closest to the center of the main-scene cluster, and using it as the cover image of the video.
7. The device for automatically extracting a video cover according to claim 6, characterized in that the device further comprises:
a frame sampling module, for sampling the decoded frame sequence according to a preset rule to obtain a set of frames;
and in the feature extraction module, the local feature points of each frame in the set obtained by the frame sampling module are extracted in turn, and the strength of each feature point is computed.
8. The device for automatically extracting a video cover according to claim 6, characterized in that the feature extraction module extracts the SIFT or SURF feature points of each frame in turn, and computes the strength of each feature point.
9. The device for automatically extracting a video cover according to claim 6, 7 or 8, characterized in that the quantization module comprises:
a clustering unit, for applying K-means clustering to the feature points retained by the feature-point retention module;
a quantizing unit, for quantizing the feature points according to the cluster center points obtained by the clustering unit.
10. The device for automatically extracting a video cover according to claim 6, 7 or 8, characterized in that the quantization module quantizes the retained feature points to K levels according to K preset feature-point levels.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811532062.1A CN109635747A (en) | 2018-12-14 | 2018-12-14 | Method and device for automatic extraction of a video cover |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109635747A (en) | 2019-04-16 |
Family
ID=66074026
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811532062.1A Pending CN109635747A (en) | 2018-12-14 | 2018-12-14 | The automatic abstracting method of video cover and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109635747A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021004247A1 (en) * | 2019-07-11 | 2021-01-14 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating video cover and electronic device |
CN113301422A (en) * | 2021-05-24 | 2021-08-24 | 腾讯音乐娱乐科技(深圳)有限公司 | Method, terminal and storage medium for acquiring video cover |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104063706A (en) * | 2014-06-27 | 2014-09-24 | 电子科技大学 | Video fingerprint extraction method based on SURF algorithm |
CN107220585A * | 2017-03-31 | 2017-09-29 | 南京邮电大学 | Video key frame extraction method based on multi-feature fusion and shot clustering
CN107527010A * | 2017-07-13 | 2017-12-29 | 央视国际网络无锡有限公司 | Method for extracting video genes based on local features and motion vectors
- 2018-12-14: CN application CN201811532062.1A filed (published as CN109635747A); status: Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103984741B (en) | Customer attribute information extraction method and system | |
CN104317959B (en) | Data mining method and device based on social platform | |
CN107894998B (en) | Video recommendation method and device | |
CN105005593B (en) | Scene recognition method and device for multi-user shared device | |
US8849798B2 (en) | Sampling analysis of search queries | |
CN103577593B (en) | Video aggregation method and system based on microblog hot topics | |
CN105893443A (en) | Video recommendation method and apparatus, and server | |
CN108932451A (en) | Audio and video content analysis method and device | |
CN103605714B (en) | Website abnormal data recognition method and device | |
CN104573304A (en) | User attribute state assessment method based on information entropy and cluster grouping | |
CN109429103B (en) | Method and device for recommending information, computer-readable storage medium and terminal device | |
CN104484435B (en) | Method for cross-analysis of user behavior | |
CN103763585A (en) | Method, device and terminal device for obtaining user characteristic information | |
CN105447147A (en) | Data processing method and apparatus | |
CN103020117B (en) | Service comparison method and system | |
CN111767430B (en) | Video resource pushing method, video resource pushing device and storage medium | |
CN109635747A (en) | The automatic abstracting method of video cover and device | |
CN107087160A (en) | User experience quality prediction method based on BP-AdaBoost neural networks | |
CN108197336B (en) | Video searching method and device | |
CN108447064A (en) | Image processing method and device | |
CN107657030A (en) | Method, apparatus, terminal device and storage medium for collecting user reading data | |
CN109829364A (en) | Expression recognition method and device, and recommendation method and device | |
CN106126698B (en) | Retrieval pushing method and system based on Lucence | |
CN106303591A (en) | Video recommendation method and device | |
CN112101692A (en) | Method and device for identifying low-quality users of mobile Internet |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |