CN105761263A - Video key frame extraction method based on shot boundary detection and clustering - Google Patents

Video key frame extraction method based on shot boundary detection and clustering

Info

Publication number
CN105761263A
Authority
CN
China
Prior art keywords
frame
video
image
key frame
key
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610093299.9A
Other languages
Chinese (zh)
Inventor
姚万超
杨朝欢
蔡登
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201610093299.9A priority Critical patent/CN105761263A/en
Publication of CN105761263A publication Critical patent/CN105761263A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G06F18/232 - Non-hierarchical techniques
    • G06F18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 - Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a video key frame extraction method based on shot boundary detection and clustering, comprising the following steps: S1, reading a video and extracting the image features of each video frame; S2, calculating the image feature difference between each frame and the adjacent previous frame; S3, detecting shot boundaries with an adaptive sliding window method; S4, extracting the key frames of each shot with a clustering algorithm; and S5, calculating the weight of each key frame, selecting several key frames with higher weights, sorting them in chronological order, and taking the selected key frames as the key frames of the video. By comparing the image features of the frames, the invention detects shot boundaries accurately and extracts the key frames of each shot efficiently. The N key frames with the highest weights, arranged in chronological order, are taken as the key frames of the video and can effectively represent the whole video.

Description

Video key frame extraction method based on shot boundary detection and clustering
Technical field
The present invention relates to the technical field of video processing, and in particular to a video key frame extraction method based on shot boundary detection and clustering.
Background art
With the rapid development of Internet multimedia, massive amounts of video are uploaded to the Internet every day, and content-based video retrieval has become a problem demanding a prompt solution. A video is a data stream of images encoded successively in temporal order. When storage space and computing resources are limited, extracting key frames to represent the video and indexing by those key frames makes content-based video retrieval feasible. Key frame extraction is therefore one of the key technologies for realizing content-based video retrieval.
In the video field, a segment composed of a series of temporally continuous frames is called a shot, and a key frame is a still image frame extracted from the original video data that summarizes the content of a shot. Key frame extraction has two main requirements: first, after redundancy is removed, the extracted key frames should effectively reflect the video content; second, the time complexity of the extraction algorithm should be low.
Traditional key frame extraction methods include shot-boundary-based methods, motion- and content-analysis-based methods, clustering-based methods, and compressed-domain methods. Shot-boundary-based methods are not robust, are easily affected by the degree of change in the video images, and the selected key frames are not necessarily representative. Motion- and content-analysis-based methods are likewise not robust and have high computational complexity, which makes them unsuitable for real-time scenarios. Clustering-based methods have high space complexity and cannot effectively preserve the temporal order and dynamic information of the original shot, so they are suitable only for short videos. Compressed-domain methods do not need to fully decompress the video stream and have low computational complexity, but they do not make effective use of the image information, so the extracted key frames are poorly representative.
Summary of the invention
The present invention provides a video key frame extraction method based on shot boundary detection and clustering. By comparing the image features of video frames, shot boundaries are detected accurately and the key frames of each shot are extracted efficiently; the N key frames with the highest weights, sorted in chronological order, are taken as the key frames of the video and can effectively represent the whole video.
A video key frame extraction method based on shot boundary detection and clustering comprises:
Step 1: read the video and extract the image features of each video frame;
Step 2: compute the image feature difference between each frame and the adjacent previous frame;
Step 3: detect shot boundaries with an adaptive sliding window method;
Step 4: extract the key frames of each shot with a clustering algorithm;
Step 5: compute the weight of each key frame, select several key frames with higher weights, sort them in chronological order, and take them as the key frames of the video.
Preferably, step 1 comprises the following sub-steps:
Step 1-1: convert each video frame to the HSV color space and divide it into several image blocks;
Step 1-2: compute the histogram feature of each image block in the HSV color space;
Step 1-3: concatenate the histogram features of all image blocks and normalize the result to obtain the image feature of the frame.
In the present invention, each frame is divided into 5 image blocks: the center block is an ellipse occupying 75% of the width and height, and the remaining area is divided equally among the four corners. When the block features are concatenated, the weight of the center block is twice the weight of each corner block.
Preferably, in step 2, the image feature difference between each frame and the adjacent previous frame is computed according to the following formula:
diff_t = || F_t - F_{t-1} ||_2
where F_t denotes the image feature of frame t, F_{t-1} denotes the image feature of frame t-1, and diff_t denotes the image feature difference between frame t and frame t-1.
Preferably, in step 3, the sliding window stores the video sequence of the current shot; the first frame of the sliding window is the beginning of a shot and the last frame is its end. Let S denote the sliding window of the video sequence, mean_diff_S denote the mean image feature difference of all frames in the sliding window, and diff_curr denote the image feature difference between the current frame and the previous frame. Two thresholds α and β are set, with β > α:
if α × mean_diff_S ≤ diff_curr ≤ β × mean_diff_S, the current frame is added to the sliding window;
if diff_curr > β × mean_diff_S, the current frame is marked as a shot boundary and the sliding window is cleared;
if diff_curr < α × mean_diff_S, the current frame is ignored.
Preferably, in step 4, the image features of all video frames in each shot are clustered with the K-means clustering algorithm, and the parameter K is computed by the following formula:
K = shot_num × ratio
where shot_num denotes the number of frames in the shot and ratio denotes a manually set compression ratio.
Preferably, in step 5, the weight is computed by the following formula:
w = class_num / total_num
where w denotes the weight of a key frame, class_num denotes the total number of frames in the cluster containing the key frame, and total_num denotes the total number of frames in the video.
The video key frame extraction method based on shot boundary detection and clustering of the present invention has the following advantages:
(a) High-performance single-frame image features. Each frame is divided into image blocks, the histogram feature of each block is computed in the HSV color space, and the weighted features are concatenated into the feature of the frame. The feature is designed for comparing differences between adjacent frames and can be extracted quickly.
(b) A leading shot boundary detection algorithm. A sliding window storing the video sequence of the current shot is designed to model the shot, and dual thresholds are set to update the shot and detect shot boundaries, which improves the accuracy of shot boundary detection while keeping detection fast.
(c) Accurate selection of shot key frames. The image features of all frames in a shot are clustered with K-means, with K determined by the compression ratio, so that the most representative key frames are selected quickly and effectively.
(d) Filtering of video key frames. The selected key frames are assigned weights and low-weight key frames are filtered out; the remaining key frames can effectively represent the whole video.
(e) Good portability and wide applicability: videos of arbitrary length can be processed.
Brief description of the drawings
Fig. 1 is a flow chart of the video key frame extraction method based on shot boundary detection and clustering of the present invention;
Fig. 2 is a flow chart of the image feature extraction algorithm in the video key frame extraction method based on shot boundary detection and clustering of the present invention;
Fig. 3 is a flow chart of the shot boundary detection algorithm in the video key frame extraction method based on shot boundary detection and clustering of the present invention.
Detailed description of the embodiments
The invention is further described below with reference to the accompanying drawings and an example.
The video key frame extraction method based on shot boundary detection and clustering provided by the present invention is implemented as a system on Linux. As shown in Fig. 1, the flow comprises the following steps:
(1) Read the video frame images and extract the image feature of each frame.
The detailed flow of step (1) is shown in Fig. 2 and comprises the following sub-steps:
(1.1) Read a video frame image and convert it to the HSV color space.
(1.2) Divide each video frame image obtained in step (1.1) into 5 parts according to visual characteristics: the center part is an ellipse occupying 75% of the width and height, and the remaining area is divided equally among the 4 corners.
(1.3) For the blocks obtained in step (1.2), compute the histogram feature of each image block in the HSV color space; according to the sensitivity of the human eye, the quantization levels of the 3 channels of the HSV color space are set to 16, 24, and 6, respectively.
(1.4) Concatenate the block histogram features obtained in step (1.3), with the weight of the center part set to twice the weight of each corner part, and normalize the result to obtain the feature of the image.
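For illustration, a minimal Python sketch of this block-wise HSV histogram feature (using OpenCV and NumPy) is given below. The 16/24/6 bin counts and the double weight of the central ellipse come from the text; the exact block masks, histogram ranges, and normalization are not fully specified in the patent, so those details are assumptions.

```python
import cv2
import numpy as np

HSV_BINS = [16, 24, 6]                 # H, S, V quantization levels (from the text)
HSV_RANGES = [0, 180, 0, 256, 0, 256]  # OpenCV HSV value ranges

def region_hist(hsv, mask):
    """HSV histogram of the pixels selected by mask, flattened to a vector."""
    hist = cv2.calcHist([hsv], [0, 1, 2], mask, HSV_BINS, HSV_RANGES)
    return hist.flatten()

def frame_feature(frame_bgr):
    """Block-wise HSV histogram feature of one frame: central ellipse + 4 corners."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    h, w = hsv.shape[:2]

    # Central ellipse occupying 75% of the width and height.
    ellipse = np.zeros((h, w), np.uint8)
    cv2.ellipse(ellipse, (w // 2, h // 2), (int(0.375 * w), int(0.375 * h)),
                0, 0, 360, 255, -1)

    # Center block gets twice the weight of each corner block (assumed weighting).
    parts = [2.0 * region_hist(hsv, ellipse)]

    # Four corner blocks: the quadrants of the frame minus the central ellipse.
    for rows, cols in [(slice(0, h // 2), slice(0, w // 2)),
                       (slice(0, h // 2), slice(w // 2, w)),
                       (slice(h // 2, h), slice(0, w // 2)),
                       (slice(h // 2, h), slice(w // 2, w))]:
        corner = np.zeros((h, w), np.uint8)
        corner[rows, cols] = 255
        corner[ellipse > 0] = 0
        parts.append(region_hist(hsv, corner))

    feat = np.concatenate(parts)
    norm = np.linalg.norm(feat)
    return feat / norm if norm > 0 else feat   # L2-normalized frame feature
```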
(2) From the image feature of the current frame obtained in step (1), compute the image feature difference between the current frame and the previous frame.
In step (2), let F_t denote the image feature of frame t, F_{t-1} the image feature of frame t-1, and diff_t the image feature difference between frame t and frame t-1; then diff_t is computed as diff_t = || F_t - F_{t-1} ||_2.
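As a small illustration, the difference sequence of step (2) can be computed over the list of per-frame feature vectors; the sketch below assumes features produced by frame_feature() above.

```python
import numpy as np

def frame_diffs(features):
    """diff_t = ||F_t - F_{t-1}||_2 for t = 1 .. T-1, given per-frame feature vectors."""
    return [float(np.linalg.norm(features[t] - features[t - 1]))
            for t in range(1, len(features))]
```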
(3) Compare the image feature difference between the current frame and the previous frame obtained in step (2) with the mean image feature difference of all frames in the current sliding window, and detect shot boundaries.
The sliding window stores the video sequence of the current shot; its first frame is the beginning of a shot and its last frame is the end of the shot.
Because different shots contain different numbers of frames, the number of frames held by the sliding window also varies from shot to shot. At initialization, or after a shot has been detected, the sliding window is cleared and the next frame is stored in it, which means a new shot has been encountered; subsequent frames are then read one by one to decide whether they belong to the current shot and whether they should be stored in the sliding window. An example of sliding window updates is shown in Table 1, where a 200-frame video is divided into 4 shots.
Table 1
Frames:          1-25    26-75    76-150    151-200
Sliding window:  1       2        3         4
Let S denote the sliding window of the video sequence, mean_diff_S the mean of the image feature differences of all frames in the sliding window, and diff_curr the image feature difference between the current frame and the previous frame. Two thresholds α and β (β > α) are set; in our experience, α lies in the range 0.1-0.4 and β in the range 1.0-3.0. The optimal values of the two thresholds often differ between videos, so in practice a batch of videos should be sampled from the video library to tune the two thresholds until the precision and recall of shot boundary detection meet the requirements.
As shown in Fig. 3, step (3) can be subdivided into:
(3.1) If α × mean_diff_S ≤ diff_curr ≤ β × mean_diff_S, the image feature difference between the current frame and the previous frame is within the allowed range and the current frame belongs to the current shot; the current frame is added to the sliding window.
(3.2) If diff_curr > β × mean_diff_S, the image feature difference between the current frame and the previous frame is too large compared with the mean feature difference of the frames stored in the sliding window, so the current frame does not belong to the current shot; the current frame is marked as a shot boundary and the sliding window is cleared.
(3.3) If diff_curr < α × mean_diff_S, the image feature difference between the current frame and the previous frame is too small; the current frame clearly belongs to the current shot, but it is ignored, because adding it to the sliding window would make mean_diff_S decrease significantly. Consider, for example, a 3-second shot in which the image stays roughly the same for 1 second: if all frames of that second were added to the sliding window, the shot boundary could easily be misjudged.
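A hedged Python sketch of this adaptive sliding-window detection follows. Rules (3.1)-(3.3) are taken from the text; the default threshold values, the handling of the first frame after the window is cleared, and storing only the feature differences (rather than the frames themselves) in the window are assumptions made for brevity.

```python
import numpy as np

def detect_shot_boundaries(features, alpha=0.3, beta=2.0):
    """Return the frame indices judged to start a new shot (rules (3.1)-(3.3))."""
    boundaries = []
    window_diffs = []                        # differences of the frames in the current window
    for t in range(1, len(features)):
        diff_curr = float(np.linalg.norm(features[t] - features[t - 1]))
        if not window_diffs:                 # freshly cleared window: keep the first difference
            window_diffs.append(diff_curr)
            continue
        mean_diff = float(np.mean(window_diffs))
        if diff_curr > beta * mean_diff:     # (3.2) too large: shot boundary, clear the window
            boundaries.append(t)
            window_diffs = []
        elif diff_curr >= alpha * mean_diff: # (3.1) within [alpha, beta] * mean: same shot
            window_diffs.append(diff_curr)
        # (3.3) diff_curr < alpha * mean_diff: ignore the frame
    return boundaries
```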
(4) From the shot boundaries obtained in step (3), every two adjacent shot boundaries determine a shot; the key frames of each shot are extracted with the K-means algorithm.
Step (4) can be subdivided into:
(4.1) Determine the parameter K of the K-means clustering. Let shot_num denote the number of frames in the shot and ratio the manually set compression ratio; then K = shot_num × ratio.
(4.2) With the parameter K obtained in (4.1), cluster the image features of all frames of the shot with the K-means clustering algorithm; the clustering produces K classes, and for each class the frame closest to the class center is taken as the key frame generated by that class.
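Step (4) can be sketched with scikit-learn's KMeans as below. The formula K = shot_num × ratio and the nearest-to-center selection come from the text; rounding K to an integer, forcing at least one cluster, and the default ratio value are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def shot_key_frames(shot_features, frame_indices, ratio=0.05):
    """Cluster the features of one shot and keep the frame nearest to each center.

    shot_features: array of shape (shot_num, dim); frame_indices: positions of
    these frames in the original video. Returns (key frame indices, cluster labels).
    """
    shot_num = len(shot_features)
    k = max(1, int(round(shot_num * ratio)))            # K = shot_num * ratio
    km = KMeans(n_clusters=k, n_init=10).fit(shot_features)
    key_indices = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(shot_features[members] - km.cluster_centers_[c], axis=1)
        key_indices.append(frame_indices[members[np.argmin(dists)]])
    return key_indices, km.labels_
```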
(5) From the shot key frames obtained in step (4), compute the key frame weights, select the N key frames with the highest weights, and sort them in chronological order.
Step (5) can be subdivided into:
(5.1) Assign each key frame a weight according to the clustering result. Let w denote the weight of a key frame, class_num the total number of frames in the cluster containing the key frame, and total_num the total number of frames in the video; then w = class_num / total_num.
(5.2) Select the N key frames with the highest weights, sort them in chronological order, and take them as the key frames of the video; the key frame images are saved to the local file system, and the features of the key frames are saved to a database as the video index.
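The weighting and filtering of step (5) can be sketched as follows, reusing the (key frame indices, cluster labels) pairs returned by shot_key_frames() above; the default value of N and the handling of weight ties are assumptions.

```python
import numpy as np

def select_video_key_frames(shots, total_num, n_keep=10):
    """Keep the n_keep highest-weight key frames over all shots, in chronological order.

    shots: list of (key frame indices, cluster labels) pairs as returned by
    shot_key_frames(); total_num: total number of frames in the video.
    """
    weighted = []
    for key_indices, labels in shots:
        class_sizes = np.bincount(labels, minlength=len(key_indices))  # frames per cluster
        for c, frame_idx in enumerate(key_indices):
            w = class_sizes[c] / total_num              # w = class_num / total_num
            weighted.append((w, frame_idx))
    weighted.sort(key=lambda p: p[0], reverse=True)     # highest weight first
    selected = [idx for _, idx in weighted[:n_keep]]
    return sorted(selected)                             # chronological order
```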
The method of the present invention combines a high-performance single-frame image feature with a leading shot boundary detection algorithm; through K-means clustering and weight filtering, key frames that represent the video can be extracted quickly and accurately, and the method has good portability and wide applicability.
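For illustration only, the sketches above can be tied together into an end-to-end routine that roughly follows Fig. 1: decode the video with OpenCV, extract per-frame features, detect shot boundaries, cluster each shot, and keep the highest-weight key frames. The function name, default parameters, and the simple shot segmentation from the boundary list are illustrative assumptions rather than part of the patent.

```python
import cv2
import numpy as np

def extract_key_frames(video_path, alpha=0.3, beta=2.0, ratio=0.05, n_keep=10):
    """Illustrative end-to-end pipeline: features -> shot boundaries -> key frames."""
    cap = cv2.VideoCapture(video_path)
    features = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        features.append(frame_feature(frame))                     # step (1)
    cap.release()
    features = np.asarray(features)

    boundaries = detect_shot_boundaries(features, alpha, beta)    # steps (2)-(3)
    starts = [0] + boundaries
    ends = boundaries + [len(features)]

    shots = []
    for s, e in zip(starts, ends):                                # step (4): per-shot K-means
        shots.append(shot_key_frames(features[s:e], list(range(s, e)), ratio))

    return select_video_key_frames(shots, len(features), n_keep)  # step (5)
```

Calling, say, extract_key_frames("example.mp4") (the file name is hypothetical) would return the chronologically ordered indices of the selected key frames, which could then be saved to disk and indexed as described in step (5.2).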
To demonstrate the effectiveness of the method of the present invention, a video retrieval comparison experiment was conducted on the CC_WEB_VIDEO database. CC_WEB_VIDEO contains many duplicate or similar videos; it comprises the retrieval results of 24 popular query terms on YouTube, Google Video, and Yahoo! Video, and redundant videos account for 27% of the retrieval results on average.
For this experiment, the video libraries corresponding to two of the query terms were selected for testing; the database information for the two query terms is shown in Table 2.
Table 2
The experiment mainly comprises the following steps:
(1) extract key frames for all videos in the libraries;
(2) index the image features of all key frames;
(3) input a video and extract its key frames;
(4) retrieve the 2000 nearest-neighbor video frames for the feature of each key frame;
(5) merge the nearest-neighbor frames belonging to the same video to obtain the weight of each similar video;
(6) sum the weights of the same similar video retrieved by all key frames, and return 30 videos in descending order of total weight.
Steps (1) and (2) are the preparation stage of video retrieval and are completed offline; steps (3) to (6) retrieve the videos similar to an input video and are completed online. In step (3), the input video is chosen as the first video of the retrieval results returned for the query term.
In steps (1) and (3), the experiment compares two methods: the key frame extraction method of the present invention and a baseline that extracts 1 frame per second. The results are shown in Table 3. Compared with extracting 1 frame per second, the number of video frames extracted by the method of the present invention is about 1/8 of that of the baseline, which significantly reduces the index storage space, the index building time, and the video retrieval time, while the precision and recall of similar video retrieval show no obvious decrease.
Table 3
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiment; any technical solution falling under the principle of the present invention belongs to the protection scope of the present invention. For those skilled in the art, several improvements and modifications made without departing from the principle of the present invention should also be regarded as within the protection scope of the present invention.

Claims (6)

1. A video key frame extraction method based on shot boundary detection and clustering, characterized by comprising:
Step 1: reading the video and extracting the image features of each video frame;
Step 2: computing the image feature difference between each frame and the adjacent previous frame;
Step 3: detecting shot boundaries with an adaptive sliding window method;
Step 4: extracting the key frames of each shot with a clustering algorithm;
Step 5: computing the weight of each key frame, selecting several key frames with higher weights, sorting them in chronological order, and taking them as the key frames of the video.
2. The video key frame extraction method based on shot boundary detection and clustering according to claim 1, characterized in that step 1 comprises the following sub-steps:
Step 1-1: converting each video frame to the HSV color space and dividing it into several image blocks;
Step 1-2: computing the histogram feature of each image block in the HSV color space;
Step 1-3: concatenating the histogram features of all image blocks and normalizing the result to obtain the image feature of the frame.
3. The video key frame extraction method based on shot boundary detection and clustering according to claim 1, characterized in that, in step 2, the image feature difference between each frame and the adjacent previous frame is computed according to the following formula:
diff_t = || F_t - F_{t-1} ||_2
where F_t denotes the image feature of frame t, F_{t-1} denotes the image feature of frame t-1, and diff_t denotes the image feature difference between frame t and frame t-1.
4. The video key frame extraction method based on shot boundary detection and clustering according to claim 1, characterized in that, in step 3, S denotes the sliding window of the video sequence, mean_diff_S denotes the mean image feature difference of all frames in the sliding window, and diff_curr denotes the image feature difference between the current frame and the previous frame; two thresholds α and β are set, with β > α;
if α × mean_diff_S ≤ diff_curr ≤ β × mean_diff_S, the current frame is added to the sliding window;
if diff_curr > β × mean_diff_S, the current frame is marked as a shot boundary and the sliding window is cleared;
if diff_curr < α × mean_diff_S, the current frame is ignored.
5. The video key frame extraction method based on shot boundary detection and clustering according to claim 1, characterized in that, in step 4, the image features of all video frames in each shot are clustered with the K-means clustering algorithm, and the parameter K is computed by the following formula:
K = shot_num × ratio
where shot_num denotes the number of frames in the shot and ratio denotes a manually set compression ratio.
6. The video key frame extraction method based on shot boundary detection and clustering according to claim 1, characterized in that, in step 5, the weight is computed by the following formula:
w = class_num / total_num
where w denotes the weight of a key frame, class_num denotes the total number of frames in the cluster containing the key frame, and total_num denotes the total number of frames in the video.
CN201610093299.9A 2016-02-19 2016-02-19 Video key frame extraction method based on shot boundary detection and clustering Pending CN105761263A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610093299.9A CN105761263A (en) 2016-02-19 2016-02-19 Video key frame extraction method based on shot boundary detection and clustering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610093299.9A CN105761263A (en) 2016-02-19 2016-02-19 Video key frame extraction method based on shot boundary detection and clustering

Publications (1)

Publication Number Publication Date
CN105761263A true CN105761263A (en) 2016-07-13

Family

ID=56330180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610093299.9A Pending CN105761263A (en) 2016-02-19 2016-02-19 Video key frame extraction method based on shot boundary detection and clustering

Country Status (1)

Country Link
CN (1) CN105761263A (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106303756A (en) * 2016-10-10 2017-01-04 中国农业大学 A kind of method and device for video copyright protecting
CN106851437A (en) * 2017-01-17 2017-06-13 南通同洲电子有限责任公司 A kind of method for extracting video frequency abstract
CN107832694A (en) * 2017-10-31 2018-03-23 北京赛思信安技术股份有限公司 A kind of key frame of video extraction algorithm
CN108470077A (en) * 2018-05-28 2018-08-31 广东工业大学 A kind of video key frame extracting method, system and equipment and storage medium
CN108875931A (en) * 2017-12-06 2018-11-23 北京旷视科技有限公司 Neural metwork training and image processing method, device, system
WO2018228130A1 (en) * 2017-06-15 2018-12-20 腾讯科技(深圳)有限公司 Video encoding method, apparatus, device, and storage medium
CN109151501A (en) * 2018-10-09 2019-01-04 北京周同科技有限公司 A kind of video key frame extracting method, device, terminal device and storage medium
CN109635736A (en) * 2018-12-12 2019-04-16 北京搜狐新媒体信息技术有限公司 A kind of video heads figure selection method and system
CN109753884A (en) * 2018-12-14 2019-05-14 重庆邮电大学 A kind of video behavior recognition methods based on key-frame extraction
CN109831680A (en) * 2019-03-18 2019-05-31 北京奇艺世纪科技有限公司 A kind of evaluation method and device of video definition
CN110147469A (en) * 2019-05-14 2019-08-20 腾讯音乐娱乐科技(深圳)有限公司 A kind of data processing method, equipment and storage medium
CN110458141A (en) * 2019-08-20 2019-11-15 北京深演智能科技股份有限公司 A kind of extracting method of key frame of video, apparatus and system
CN110472484A (en) * 2019-07-02 2019-11-19 山东师范大学 Video key frame extracting method, system and equipment based on multiple view feature
WO2020020241A1 (en) * 2018-07-27 2020-01-30 北京京东尚科信息技术有限公司 Video processing method and apparatus
CN110795599A (en) * 2019-10-18 2020-02-14 山东师范大学 Video emergency monitoring method and system based on multi-scale graph
CN110852289A (en) * 2019-11-16 2020-02-28 公安部交通管理科学研究所 Method for extracting information of vehicle and driver based on mobile video
CN111510792A (en) * 2020-05-22 2020-08-07 山东师范大学 Video abstract generation method and system based on adaptive weighted graph difference analysis
CN112511854A (en) * 2020-11-27 2021-03-16 刘亚虹 Live video highlight generation method, device, medium and equipment
CN112579823A (en) * 2020-12-28 2021-03-30 山东师范大学 Video abstract generation method and system based on feature fusion and incremental sliding window
CN112954450A (en) * 2021-02-02 2021-06-11 北京字跳网络技术有限公司 Video processing method and device, electronic equipment and storage medium
CN113038142A (en) * 2021-03-25 2021-06-25 北京金山云网络技术有限公司 Video data screening method and device and electronic equipment
CN113112519A (en) * 2021-04-23 2021-07-13 电子科技大学 Key frame screening method based on interested target distribution
WO2022078363A1 (en) * 2020-10-13 2022-04-21 北京沃东天骏信息技术有限公司 Video preview content generation method and apparatus, computer apparatus and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1515276A2 (en) * 2003-09-12 2005-03-16 Hewlett-Packard Development Company, L.P. Generating animated image file from video data file frames
CN102254006A (en) * 2011-07-15 2011-11-23 上海交通大学 Method for retrieving Internet video based on contents
CN103065153A (en) * 2012-12-17 2013-04-24 西南科技大学 Video key frame extraction method based on color quantization and clusters
CN103150373A (en) * 2013-03-08 2013-06-12 北京理工大学 Generation method of high-satisfaction video summary

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1515276A2 (en) * 2003-09-12 2005-03-16 Hewlett-Packard Development Company, L.P. Generating animated image file from video data file frames
CN102254006A (en) * 2011-07-15 2011-11-23 上海交通大学 Method for retrieving Internet video based on contents
CN103065153A (en) * 2012-12-17 2013-04-24 西南科技大学 Video key frame extraction method based on color quantization and clusters
CN103150373A (en) * 2013-03-08 2013-06-12 北京理工大学 Generation method of high-satisfaction video summary

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JIE-LING LAI, YANGYI: "Key frame extraction based on Frame", 24th Chinese Control and Decision Conference *
FU ZETIAN et al.: "Intelligent Acquisition of Agricultural Information for Mobile Terminals" (面向移动终端的农业信息智能获取), 30 September 2015, Beijing: China Agricultural University Press *
HONG XIAOJIAO et al.: "Key Frame Extraction from Motion Capture Data Based on Laplacian Score Feature Selection" (基于拉普拉斯分值特征选择的运动捕获数据关键帧提取), Computer Engineering and Science (计算机工程与科学) *
SU XINNING: "Information Retrieval Theory and Technology" (信息检索理论与技术), 30 September 2004, Beijing: Scientific and Technical Documentation Press *

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106303756A (en) * 2016-10-10 2017-01-04 中国农业大学 A kind of method and device for video copyright protecting
CN106851437A (en) * 2017-01-17 2017-06-13 南通同洲电子有限责任公司 A kind of method for extracting video frequency abstract
WO2018228130A1 (en) * 2017-06-15 2018-12-20 腾讯科技(深圳)有限公司 Video encoding method, apparatus, device, and storage medium
US11297328B2 (en) 2017-06-15 2022-04-05 Tencent Technology (Shenzhen) Company Ltd Video coding method, device, device and storage medium
US10893275B2 (en) 2017-06-15 2021-01-12 Tencent Technology (Shenzhen) Company Ltd Video coding method, device, device and storage medium
CN107832694B (en) * 2017-10-31 2021-01-12 北京赛思信安技术股份有限公司 Video key frame extraction method
CN107832694A (en) * 2017-10-31 2018-03-23 北京赛思信安技术股份有限公司 A kind of key frame of video extraction algorithm
CN108875931A (en) * 2017-12-06 2018-11-23 北京旷视科技有限公司 Neural metwork training and image processing method, device, system
CN108875931B (en) * 2017-12-06 2022-06-21 北京旷视科技有限公司 Neural network training and image processing method, device and system
CN108470077B (en) * 2018-05-28 2023-07-28 广东工业大学 Video key frame extraction method, system and device and storage medium
CN108470077A (en) * 2018-05-28 2018-08-31 广东工业大学 A kind of video key frame extracting method, system and equipment and storage medium
US11445272B2 (en) 2018-07-27 2022-09-13 Beijing Jingdong Shangke Information Technology Co, Ltd. Video processing method and apparatus
EP3826312A4 (en) * 2018-07-27 2022-04-27 Beijing Jingdong Shangke Information Technology Co., Ltd. Video processing method and apparatus
WO2020020241A1 (en) * 2018-07-27 2020-01-30 北京京东尚科信息技术有限公司 Video processing method and apparatus
CN110769279A (en) * 2018-07-27 2020-02-07 北京京东尚科信息技术有限公司 Video processing method and device
CN109151501B (en) * 2018-10-09 2021-06-08 北京周同科技有限公司 Video key frame extraction method and device, terminal equipment and storage medium
CN109151501A (en) * 2018-10-09 2019-01-04 北京周同科技有限公司 A kind of video key frame extracting method, device, terminal device and storage medium
CN109635736A (en) * 2018-12-12 2019-04-16 北京搜狐新媒体信息技术有限公司 A kind of video heads figure selection method and system
CN109753884A (en) * 2018-12-14 2019-05-14 重庆邮电大学 A kind of video behavior recognition methods based on key-frame extraction
CN109831680A (en) * 2019-03-18 2019-05-31 北京奇艺世纪科技有限公司 A kind of evaluation method and device of video definition
CN110147469B (en) * 2019-05-14 2023-08-08 腾讯音乐娱乐科技(深圳)有限公司 Data processing method, device and storage medium
CN110147469A (en) * 2019-05-14 2019-08-20 腾讯音乐娱乐科技(深圳)有限公司 A kind of data processing method, equipment and storage medium
CN110472484B (en) * 2019-07-02 2021-11-09 山东师范大学 Method, system and equipment for extracting video key frame based on multi-view characteristics
CN110472484A (en) * 2019-07-02 2019-11-19 山东师范大学 Video key frame extracting method, system and equipment based on multiple view feature
CN110458141A (en) * 2019-08-20 2019-11-15 北京深演智能科技股份有限公司 A kind of extracting method of key frame of video, apparatus and system
CN110795599B (en) * 2019-10-18 2022-04-15 山东师范大学 Video emergency monitoring method and system based on multi-scale graph
CN110795599A (en) * 2019-10-18 2020-02-14 山东师范大学 Video emergency monitoring method and system based on multi-scale graph
CN110852289A (en) * 2019-11-16 2020-02-28 公安部交通管理科学研究所 Method for extracting information of vehicle and driver based on mobile video
CN111510792B (en) * 2020-05-22 2022-04-15 山东师范大学 Video abstract generation method and system based on adaptive weighted graph difference analysis
CN111510792A (en) * 2020-05-22 2020-08-07 山东师范大学 Video abstract generation method and system based on adaptive weighted graph difference analysis
WO2022078363A1 (en) * 2020-10-13 2022-04-21 北京沃东天骏信息技术有限公司 Video preview content generation method and apparatus, computer apparatus and storage medium
CN112511854A (en) * 2020-11-27 2021-03-16 刘亚虹 Live video highlight generation method, device, medium and equipment
CN112579823A (en) * 2020-12-28 2021-03-30 山东师范大学 Video abstract generation method and system based on feature fusion and incremental sliding window
CN112579823B (en) * 2020-12-28 2022-06-24 山东师范大学 Video abstract generation method and system based on feature fusion and incremental sliding window
CN112954450A (en) * 2021-02-02 2021-06-11 北京字跳网络技术有限公司 Video processing method and device, electronic equipment and storage medium
CN113038142B (en) * 2021-03-25 2022-11-01 北京金山云网络技术有限公司 Video data screening method and device and electronic equipment
CN113038142A (en) * 2021-03-25 2021-06-25 北京金山云网络技术有限公司 Video data screening method and device and electronic equipment
CN113112519A (en) * 2021-04-23 2021-07-13 电子科技大学 Key frame screening method based on interested target distribution

Similar Documents

Publication Publication Date Title
CN105761263A (en) Video key frame extraction method based on shot boundary detection and clustering
CN108830855B (en) Full convolution network semantic segmentation method based on multi-scale low-level feature fusion
US20210012094A1 (en) Two-stage person searching method combining face and appearance features
US8548256B2 (en) Method for fast scene matching
US8467611B2 (en) Video key-frame extraction using bi-level sparsity
CN113065474B (en) Behavior recognition method and device and computer equipment
US20120148149A1 (en) Video key frame extraction using sparse representation
KR101548438B1 (en) Method and apparatus for comparing videos
CN112580523A (en) Behavior recognition method, behavior recognition device, behavior recognition equipment and storage medium
CN107169106B (en) Video retrieval method, device, storage medium and processor
CN111079539B (en) Video abnormal behavior detection method based on abnormal tracking
JP2003016448A (en) Event clustering of images using foreground/background segmentation
EP3239896B1 (en) Data structure for describing an image sequence, and methods for extracting and matching these data structures
EP2270749A2 (en) Methods of representing images
CA2753978A1 (en) Clustering videos by location
CN111428589B (en) Gradual transition identification method and system
CN111460961A (en) CDVS-based similarity graph clustering static video summarization method
Mahmoud et al. Unsupervised video summarization via dynamic modeling-based hierarchical clustering
CN114782997A (en) Pedestrian re-identification method and system based on multi-loss attention adaptive network
CN113886632B (en) Video retrieval matching method based on dynamic programming
WO2017202086A1 (en) Image screening method and device
CN109002808B (en) Human behavior recognition method and system
CN111191587B (en) Pedestrian re-identification method and system
CN105989063A (en) Video retrieval method and device
CN111832351A (en) Event detection method and device and computer equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20160713