CN103093467A - Shot boundary detection method based on double detection model - Google Patents
- Publication number
- CN103093467A CN103093467A CN2013100208838A CN201310020883A CN103093467A CN 103093467 A CN103093467 A CN 103093467A CN 2013100208838 A CN2013100208838 A CN 2013100208838A CN 201310020883 A CN201310020883 A CN 201310020883A CN 103093467 A CN103093467 A CN 103093467A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses a shot boundary detection method based on a double detection model. The method comprises two stages: initial detection and recheck. First, each frame of the input video is transformed to the hue, saturation, value (HSV) color space; non-uniform blocks are then used as the judgment feature, combined with an adaptive binary search mechanism over a sliding window, to perform the initial shot boundary detection. In the recheck stage, the speeded-up robust features (SURF) algorithm is used to match the candidate boundaries obtained by the initial detection and to eliminate its false detections. The method significantly improves the accuracy of shot boundary detection.
Description
Technical field
The invention belongs to the field of video content analysis, and specifically relates to a shot boundary detection method based on a double detection model.
Background technology
With the development of multimedia technology, the volume of video data people encounter is growing at an unprecedented rate. How to retrieve the needed information from massive video collections conveniently, quickly, and accurately has therefore long been a focus of attention. Manual visual classification is too time-consuming and is affected by human factors. Among the various video analysis methods, the first task is shot segmentation, which is the basis of the others; on that basis, further research such as video key frame extraction and content-based video retrieval can be carried out.
In the video analysis field, scholars at home and abroad have done a large amount of research on shot boundary detection. The main methods are based on pixels, histograms, edge features, or models, and each has certain limitations. For example, pixel-based methods are simple to compute and easy to implement, but are very sensitive to noise and to camera or object motion. Moreover, most such algorithms perform only a single pass of detection over the shot boundaries, which leads to low accuracy.
Summary of the invention
To address the deficiencies of the prior art, the invention provides a video shot detection method with a double check mechanism, combining non-uniform blocking, adaptive binary search, and the speeded-up robust features (SURF) algorithm.
The method of the invention specifically comprises the following steps:
(1) perform the initial shot boundary detection in the HSV space using an adaptive binary search algorithm;
(2) recheck the candidate shot boundaries using the SURF algorithm;
(3) complete the shot segmentation by sliding the window.
Beneficial effects of the invention:
First, since most shot boundary detection algorithms perform only a single pass of detection and therefore achieve low accuracy, the invention adopts a double check mechanism combining non-uniform blocking, adaptive binary search, and the SURF algorithm, which effectively improves accuracy.
Second, although the recheck stage slightly increases the algorithmic complexity, the sliding window mechanism adopted by the invention effectively reduces it. On the whole, the invention improves detection accuracy while keeping the complexity low.
Description of drawings
Fig. 1 is a flowchart of the method of the invention;
Fig. 2 illustrates the non-uniform blocking;
Fig. 3 is a flowchart of the SURF recheck stage.
Embodiment
The invention is further described below with reference to the accompanying drawings.
As shown in Fig. 1, the shot boundary detection method of the invention comprises the following steps:
1. Read and decode a video window, and divide it evenly into two subwindows of 8 frames each; the middle frame belongs to both the left subwindow and the right subwindow, so the window spans 15 frames in total.
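As a minimal sketch of the window layout in step 1 (the 15-frame window length follows from two 8-frame subwindows sharing their middle frame; the helper name and 0-based indexing are assumptions, not part of the patent):

```python
def subwindows(start):
    """Return the frame index lists of the left and right subwindows
    for a window beginning at `start`. Each subwindow holds 8 frames,
    and the two share the middle frame, so the window spans 15 frames."""
    left = list(range(start, start + 8))        # frames start .. start+7
    right = list(range(start + 7, start + 15))  # frames start+7 .. start+14
    return left, right
```

For a window starting at frame 0, the shared middle frame is frame 7 and the window's last frame is frame 14, which matches step 5's rule of advancing to the 15th frame when no boundary is found.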
2. Compute the frame difference between the first and last frames of each subwindow. As shown in Fig. 2, each video frame is first partitioned into three non-uniform blocks, and the differences in hue, saturation, and value under the HSV color model are computed on each block. The total hue, saturation, and value differences are then computed as weighted sums over the three blocks:

[formula images not reproduced]

where m is the block index, m = 1, 2, 3; i, j are frame numbers; W and H are the width and height of the frame; the block-wise terms denote the hue, saturation, and value differences of frames i and j on block m; and each block has a weighting coefficient. The frame-to-frame difference of the two frames is obtained by multiplying the total hue, saturation, and value differences together:

[formula image not reproduced]
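Under the definitions in step 2, a sketch of the frame difference could look like the following. Since the formula images are not reproduced, the block-wise difference is taken here as a mean absolute channel difference and the equal block weights are assumptions:

```python
def frame_diff(f1, f2, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Frame-to-frame difference of two frames in HSV space.

    f1, f2: dicts mapping block index m (1..3) to lists of (h, s, v)
    pixel tuples; weights: assumed per-block weighting coefficients.
    The total H, S, and V differences are weighted sums of block-wise
    mean absolute differences; the frame difference is their product,
    as the patent describes.
    """
    totals = [0.0, 0.0, 0.0]                     # total H, S, V differences
    for m in (1, 2, 3):
        for c in range(3):                       # channel: 0=H, 1=S, 2=V
            diffs = [abs(p1[c] - p2[c]) for p1, p2 in zip(f1[m], f2[m])]
            totals[c] += weights[m - 1] * sum(diffs) / len(diffs)
    d_h, d_s, d_v = totals
    return d_h * d_s * d_v                       # product of the three totals
```

Two identical frames yield a difference of 0, so only genuine channel changes contribute to the product.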
3. Perform the initial shot boundary detection with the adaptive binary search algorithm, using criteria (5)-(8):

[formula images for criteria (5)-(8) not reproduced]

In the formulas, one parameter controls the abrupt (cut) boundary detection and another controls the gradual shot detection. If the head-tail frame difference of the left or right subwindow satisfies formula (5) or (6), an abrupt shot boundary may exist in the corresponding subwindow; continue the binary search on that subwindow until its size is 2. If neither subwindow difference satisfies formulas (5), (6), (7), or (8), no shot boundary exists in this window, and the binary search ends.
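A sketch of the binary search in step 3, assuming a single abrupt-cut threshold (the actual thresholds and the gradual-transition criteria live in the unreproduced formulas (5)-(8)):

```python
def binary_search_cut(diff, lo, hi, threshold):
    """Locate an abrupt cut among frames lo..hi by repeated halving.

    diff(i, j): frame difference between frames i and j (e.g. the HSV
    block-based difference); `threshold` is an assumed stand-in for the
    cut-detection parameter of formulas (5)/(6). Returns the index pair
    (k, k+1) straddling the cut, or None if no boundary is found.
    """
    while hi - lo > 1:
        if diff(lo, hi) <= threshold:
            return None                      # no boundary in this window
        mid = (lo + hi) // 2
        # Descend into whichever half still shows a large head-tail
        # difference; the boundary must lie inside it.
        if diff(lo, mid) > threshold:
            hi = mid
        else:
            lo = mid
    return (lo, hi) if diff(lo, hi) > threshold else None
```

Each iteration halves the window, so a cut inside a 15-frame window is pinned down to a consecutive frame pair in at most four halvings.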
4. As shown in Fig. 3, if the initial detection indicates that a shot boundary may exist, extract the SURF feature points of the first and last frames of the candidate boundary and record their respective counts. In formulas (9)-(11) [images not reproduced], T is a feature point count parameter and TP is a parameter for comparing the feature point counts of the two frames. If the counts satisfy any one of formulas (9), (10), and (11), a shot boundary is deemed to exist between the two frames. Otherwise, perform SURF feature point matching between the two frames, obtain the number of matches N, and compute the SURF feature matching rate R between the two frames by formula (12) [image not reproduced]. If R is less than 8%, a shot boundary exists between the two frames; otherwise no shot boundary exists, and this candidate is removed from the initial detection result.
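The recheck decision of step 4 can be sketched as follows. Since formula (12) is not reproduced, the matching rate R = N / min(N1, N2) is an assumption, and a brute-force nearest-neighbour matcher over toy descriptor vectors stands in for the real SURF matcher:

```python
def match_rate(desc1, desc2, max_dist=0.5):
    """Assumed SURF-style matching rate between two descriptor sets.

    desc1, desc2: lists of equal-length feature vectors. A point in
    desc1 matches if its nearest neighbour in desc2 lies within
    Euclidean distance `max_dist`. R = N / min(N1, N2) is an assumed
    form of the patent's unreproduced formula (12).
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    n_matches = sum(
        1 for d1 in desc1 if min(dist(d1, d2) for d2 in desc2) < max_dist
    )
    return n_matches / min(len(desc1), len(desc2))


def is_boundary(desc1, desc2):
    """Per the patent's recheck rule: a boundary exists iff R < 8%."""
    return match_rate(desc1, desc2) < 0.08
```

Frames on either side of a true cut share almost no feature matches, so R falls below the 8% threshold, while frames within one shot match heavily and the candidate is discarded.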
5. If a shot boundary is detected, take the tail frame of the shot as the first frame of the next window; otherwise take the 15th frame of the current window as the first frame of the next window. Repeat steps 1-4 until the video ends. At the end of the video the left subwindow may contain fewer frames than the right; in that case compare only the head-tail frame differences of the left and right subwindows, and if a difference is significant, a shot boundary is deemed to exist. The remaining frames may also number no more than 8 (i.e., there is no right subwindow); in that case compare only the difference between the first and last frames of the left subwindow, and if the difference is significant, a shot boundary is deemed to exist.
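The sliding-window advance of step 5 can be sketched as follows (the 0-based indexing convention, under which the 15th frame of a window starting at `start` has index `start + 14`, is an assumption):

```python
def next_window_start(start, boundary_tail=None):
    """Advance the sliding window per step 5: if a shot boundary was
    detected, the next window begins at the tail frame of that shot;
    otherwise it begins at the 15th frame of the current 15-frame
    window, i.e. index start + 14 under 0-based indexing."""
    if boundary_tail is not None:
        return boundary_tail
    return start + 14
```

Reusing the last frame of a boundary-free window as the first frame of the next one ensures no frame pair straddling two windows goes unexamined.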
Claims (4)
1. A shot boundary detection method based on a double detection model, characterized by comprising the steps of:
step (1): performing the initial shot boundary detection in the HSV space using an adaptive binary search algorithm;
step (2): rechecking the candidate shot boundaries using the SURF algorithm;
step (3): completing the shot segmentation by sliding the window.
2. The shot boundary detection method based on a double detection model according to claim 1, characterized in that the specific method of step (1) is as follows:
a) first read a video window and divide it into a left subwindow and a right subwindow of 8 frames each;
b) divide each frame into 3 segments along both its width and height in the ratio 3:14:3; of the resulting regions, the 4 corners belong to one block, the center belongs to an independent block, and the remainder belongs to a third block, giving three blocks in total;
c) compute the hue, saturation, and value features in the HSV color space on each block;
d) construct the frame-to-frame differences of hue, saturation, and value, computed as follows:

[formula images not reproduced]

where m is the block index, m = 1, 2, 3; i, j are frame numbers; W and H are the width and height of the frame; the block-wise terms denote the hue, saturation, and value differences of frames i and j on block m; each block has a weighting coefficient; and the frame-to-frame difference of the two frames is obtained by multiplying the total hue, saturation, and value differences together:

[formula image not reproduced]
e) perform the initial shot boundary detection with the adaptive binary search algorithm, using criteria (5)-(8):

[formula images not reproduced]

In the formulas, the two compared quantities are the frame-to-frame difference between the first and last frames of the left subwindow and that of the right subwindow; one parameter controls the abrupt (cut) boundary detection and another controls the gradual shot detection. If either difference satisfies formula (5) or (6), an abrupt shot boundary may exist in the corresponding subwindow; continue the binary search on that subwindow until its size is 2.
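The 3:14:3 partition of step b) can be sketched as follows; the function assigns each pixel to block 1 (corners), 2 (center), or 3 (remainder), with the segment boundaries derived from the 3:14:3 ratio (the integer rounding behavior is an assumption, since the patent does not specify it):

```python
def block_of(x, y, w, h):
    """Return the non-uniform block index (1, 2, or 3) of pixel (x, y)
    in a w x h frame split 3:14:3 along both axes: the four corner
    regions form block 1, the center forms block 2, and the remaining
    edge strips form block 3."""
    # Segment boundaries from the 3:14:3 ratio (3 + 14 + 3 = 20 parts).
    x_lo, x_hi = 3 * w // 20, 17 * w // 20
    y_lo, y_hi = 3 * h // 20, 17 * h // 20
    in_x_mid = x_lo <= x < x_hi
    in_y_mid = y_lo <= y < y_hi
    if in_x_mid and in_y_mid:
        return 2          # center block
    if not in_x_mid and not in_y_mid:
        return 1          # one of the four corners
    return 3              # top/bottom/left/right edge strips
```

The large 14:20 center block dominates the frame, which matches the intent of weighting the visually important central region separately from the corners and edges.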
3. The shot boundary detection method based on a double detection model according to claim 1, characterized in that the specific method of step (2) is as follows:
if the initial detection indicates that a shot boundary may exist, extract the SURF feature points of the first and last frames of the candidate boundary and record their respective counts; in formulas (9)-(11) [images not reproduced], T is a feature point count parameter and TP is a parameter for comparing the feature point counts of the two frames; if the counts satisfy any one of formulas (9), (10), and (11), a shot boundary is deemed to exist between the two frames; otherwise, perform SURF feature point matching between the two frames, obtain the number of matches N, and compute the SURF feature matching rate R between the two frames by formula (12) [image not reproduced]; if R is less than 8%, a shot boundary exists between the two frames; otherwise no shot boundary exists between them, and this candidate is removed from the initial detection result.
4. The shot boundary detection method based on a double detection model according to claim 1, characterized in that the specific method of step (3) is as follows:
if a shot boundary is detected, take the tail frame of the shot as the first frame of the next window; otherwise take the 15th frame of the current window as the first frame of the next window; repeat step (1) and step (2) until the video ends; at the end of the video the left subwindow may contain fewer frames than the right, in which case compare only the head-tail frame differences of the left and right subwindows, and if a difference is significant, a shot boundary is deemed to exist; the remaining frames may also number no more than 8, i.e., there is no right subwindow, in which case compare only the difference between the first and last frames of the left subwindow, and if the difference is significant, a shot boundary is deemed to exist.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN2013100208838A | 2013-01-21 | 2013-01-21 | Shot boundary detection method based on double detection model |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN103093467A | 2013-05-08 |
Family
ID=48205998
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN2013100208838A (pending) | Shot boundary detection method based on double detection model | 2013-01-21 | 2013-01-21 |
Country Status (1)
| Country | Link |
|---|---|
| CN | CN103093467A |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106203244A | 2015-05-08 | 2016-12-07 | 无锡天脉聚源传媒科技有限公司 | Method and device for determining shot type |
| CN106412619A | 2016-09-28 | 2017-02-15 | 江苏亿通高科技股份有限公司 | Shot boundary detection method based on HSV color histogram and DCT perceptual hash |
| CN108268896A | 2018-01-18 | 2018-07-10 | 天津市国瑞数码安全系统股份有限公司 | Pornographic image detection method combining HSV and SURF features |
| CN110188625A | 2019-05-13 | 2019-08-30 | 浙江大学 | Video fine-grained structuring method based on multi-feature fusion |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1477600A | 2003-07-18 | 2004-02-25 | 北京大学计算机科学技术研究所 | Content-based scene searching method |
| CN102800095A | 2012-07-17 | 2012-11-28 | 南京特雷多信息科技有限公司 | Shot boundary detection method |
Non-Patent Citations (3)
| Title |
|---|
| HUA ZHANG ET AL: "A Shot Boundary Detection Method Based on Color Feature", 2011 International Conference on Computer Science and Network Technology |
| MURAT BIRINCI ET AL: "Video Shot Boundary Detection By Structural Analysis of Local Image Features", 12th International Workshop on Image Analysis for Multimedia Interactive Services |
| CHAO Juan et al.: "Video shot segmentation algorithm based on a double detection model", Journal of Shanghai Jiao Tong University |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106203244A | 2015-05-08 | 2016-12-07 | 无锡天脉聚源传媒科技有限公司 | Method and device for determining shot type |
| CN106203244B | 2015-05-08 | 2019-08-27 | 无锡天脉聚源传媒科技有限公司 | Method and device for determining shot type |
| CN106412619A | 2016-09-28 | 2017-02-15 | 江苏亿通高科技股份有限公司 | Shot boundary detection method based on HSV color histogram and DCT perceptual hash |
| CN108268896A | 2018-01-18 | 2018-07-10 | 天津市国瑞数码安全系统股份有限公司 | Pornographic image detection method combining HSV and SURF features |
| CN110188625A | 2019-05-13 | 2019-08-30 | 浙江大学 | Video fine-grained structuring method based on multi-feature fusion |
| CN110188625B | 2019-05-13 | 2021-07-02 | 浙江大学 | Video fine-grained structuring method based on multi-feature fusion |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | C10 | Entry into substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | C02 | Deemed withdrawal of patent application after publication (patent law 2001) | |
| | WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20130508 |