CN100555329C - Video foreground moving object segmentation method based on multi-scale wavelet transform - Google Patents
Video foreground moving object segmentation method based on multi-scale wavelet transform
- Publication number
- CN100555329C (application numbers CNB2008100323905A, CN200810032390A)
- Authority
- CN
- China
- Prior art keywords
- image
- scale wavelet
- video
- moving object
- difference
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Links
Images
Abstract
The present invention relates to a video moving object segmentation method. Because the wavelet transform is localized in both the time domain and the frequency domain, it can suppress interference and extract local information. Based on background difference, the method uses multi-scale wavelet features and applies a multi-scale wavelet transform to the difference image; since the moving foreground object produces large amplitude changes, the foreground moving object information is extracted from the video image. The method requires no scene-specific training, manual correction, or human judgment and prior assumptions, and can segment video moving objects under a variety of conditions.
Description
Technical field
The present invention relates to a method for segmenting moving foreground objects in video, used in digital video image analysis and target extraction. It belongs to the field of intelligent information processing.
Background technology
Moving foreground object segmentation means finding the moving foreground target in a given sequence of two-dimensional consecutive images and separating it from the video scene. Accurate segmentation of moving foreground objects is the basis of object tracking, classification, recognition, and motion-parameter extraction, and has important practical significance and value in fields such as video surveillance, autonomous navigation, multimedia standards applications, and pattern recognition.
Although the human eye can easily identify moving foreground objects in video, fully automatic video segmentation and extraction suitable for generic video sequences is still a difficult problem for computers. First, moving foreground objects are themselves varied and lack a unique definition. Second, for the same video scene, different applications are interested in different video objects.
Existing methods for segmenting moving foreground objects fall mainly into three classes: background modeling, inter-frame difference, and optical flow. Background-modeling methods are sensitive to environmental changes and require background estimation and updating. Inter-frame difference methods are sensitive to dynamic scene changes, have difficulty segmenting the moving target completely, and easily produce holes inside the target. Optical-flow methods are computationally expensive and complex, and need dedicated hardware support for real-time processing.
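As a minimal illustration of the hole problem mentioned above for inter-frame difference methods, the following NumPy sketch differences two synthetic frames in which a uniform bright square has moved; the frame sizes and intensity values are invented for this example, not taken from the patent:

```python
import numpy as np

# Two synthetic frames: a uniform 6x6 bright square shifted right by 2 pixels.
frame_prev = np.zeros((16, 16), dtype=np.int32)
frame_curr = np.zeros((16, 16), dtype=np.int32)
frame_prev[5:11, 3:9] = 200
frame_curr[5:11, 5:11] = 200

# Inter-frame difference: only the leading and trailing edges change,
# so the uniform interior of the object cancels out, leaving a "hole".
diff = np.abs(frame_curr - frame_prev)
moving = diff > 0

# The overlap region (columns 5..8 of the square) is identical in both
# frames and is therefore missed entirely by the difference.
interior_detected = moving[5:11, 5:9].any()
print(interior_detected)  # False: the object interior is not detected
```

Only the two vertical strips uncovered or newly covered by the motion are flagged, which is exactly the incomplete segmentation the background describes.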
Summary of the invention
The purpose of the present invention is to address the shortcomings of existing video foreground moving object segmentation methods, which need scene-specific training, manual correction, or human judgment and a-priori assumptions, are sensitive to dynamic scene changes and noise, and are computationally complex. The invention provides a video foreground moving object segmentation method based on multi-scale wavelet transform that can segment moving foreground objects under a variety of conditions.
To realize this purpose, the concept of the present invention is as follows. For a two-dimensional image I(x, y), the wavelet transform at scale 2^j in direction k (k = x, y) is

W_{2^j}^k I(x, y) = I ∗ ψ_{2^j}^k(x, y)

where the wavelet functions in the x and y directions can be expressed as

ψ^x(x, y) = ∂θ(x, y)/∂x,  ψ^y(x, y) = ∂θ(x, y)/∂y

and θ(x, y) is the smoothing filter function. It follows that, after the image I(x, y) is smoothed by θ(x, y), its wavelet transform at the different scales is

W_{2^j}^x I(x, y) = 2^j (∂/∂x)(I ∗ θ_{2^j})(x, y),  W_{2^j}^y I(x, y) = 2^j (∂/∂y)(I ∗ θ_{2^j})(x, y)

If the gradient amplitude

M_{2^j} I(x, y) = √( |W_{2^j}^x I(x, y)|² + |W_{2^j}^y I(x, y)|² )

reaches a local maximum along the gradient direction, the point (x, y) of the image is a multi-scale edge point.

Edge points at the different scales can be determined accordingly. However, noise is sensitive to scale changes, so searching for the local amplitude maxima described above cannot effectively suppress noise. To overcome this influence, the search for local amplitude maxima is replaced by selecting the points whose gradient amplitude exceeds a threshold:

E(x, y) = √( (h ⊗ I)²(x, y) + (v ⊗ I)²(x, y) ) > T

where h and v are the filter operators in the horizontal and vertical directions respectively, T is a threshold, and ⊗ denotes convolution.
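A minimal sketch of the thresholded gradient-amplitude edge detection described above. The first-difference filters standing in for h and v, the test image, and the threshold value are assumptions made for illustration; the patent text does not specify the exact filter operators:

```python
import numpy as np

def edge_map(img, threshold):
    """Mark points whose gradient amplitude exceeds a fixed threshold,
    replacing the noise-sensitive search for local maxima along the
    gradient direction. h and v are simple first-difference filters here."""
    img = img.astype(np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]   # horizontal filter h (x-difference)
    gy[:-1, :] = img[1:, :] - img[:-1, :]   # vertical filter v (y-difference)
    magnitude = np.sqrt(gx**2 + gy**2)      # gradient amplitude E(x, y)
    return magnitude > threshold            # edge points: E > T

# A step edge between a dark and a bright region.
img = np.zeros((8, 8))
img[:, 4:] = 100.0
edges = edge_map(img, threshold=50.0)
print(edges[:, 3].all())   # True: the column just left of the step is marked
```

Applying the same thresholding at several smoothing scales would give the multi-scale edge points; this sketch shows a single scale only.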
According to the above inventive concept, the present invention adopts the following technical solution:
A video foreground moving object segmentation method based on multi-scale wavelet transform, characterized in that, because the wavelet transform is localized in both the time domain and the frequency domain, interference is suppressed and local information is extracted; based on background difference, multi-scale wavelet features are used and a multi-scale wavelet transform is applied to the difference image; since the moving foreground object produces large amplitude changes, foreground moving object information is extracted from the video image. The concrete steps are as follows:
1. Background difference: subtract the background image I₂(x, y) from the current frame image I₁(x, y) to obtain the difference image D(x, y):

D(x, y) = I₁(x, y) − I₂(x, y);
2. Multi-scale wavelet transform of the difference image:

E(x, y) = √( (h ⊗ D)²(x, y) + (v ⊗ D)²(x, y) )

where D is the difference image, h and v are the filter operators in the horizontal and vertical directions respectively, and ⊗ denotes convolution;
3. Determination of the foreground moving object region: determine a threshold T for the multi-scale wavelet transform E of the difference image; the region formed by all pixels whose E value exceeds T is defined as the video moving foreground object region.
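The three steps above can be sketched as follows. This uses a single-scale stand-in for the multi-scale wavelet transform, with simple difference filters assumed for h and v and an arbitrary threshold; it is an illustration under those assumptions, not the patent's exact implementation:

```python
import numpy as np

def segment_foreground(current, background, threshold):
    """Background difference, filter response E, and thresholding."""
    # Step 1: background difference D(x, y) = I1(x, y) - I2(x, y).
    d = current.astype(np.float64) - background.astype(np.float64)
    # Step 2: responses of horizontal and vertical filters on D,
    # combined into the amplitude E (single scale shown here).
    hx = np.zeros_like(d)
    vy = np.zeros_like(d)
    hx[:, :-1] = d[:, 1:] - d[:, :-1]
    vy[:-1, :] = d[1:, :] - d[:-1, :]
    e = np.sqrt(hx**2 + vy**2)
    # Step 3: the foreground region is the set of pixels with E above T.
    return e > threshold

background = np.zeros((12, 12))
current = background.copy()
current[4:8, 4:8] = 180.0        # a bright moving object enters the scene
mask = segment_foreground(current, background, threshold=90.0)
print(mask.any() and not mask[0, 0])   # True: boundary found, empty corner clear
```

Because the filters respond to amplitude changes in the difference image rather than to the raw inter-frame difference, the object boundary is recovered without modeling the background statistically.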
Compared with the prior art, the present invention has the following obvious substantive features and remarkable advantages:
The method is computationally simple, flexible, and easy to implement. It removes the need for scene-specific training, manual correction, or human judgment and prior assumptions in segmenting moving foreground objects in digital video, improves the robustness of moving-object segmentation, and can adapt to video moving object segmentation under a variety of conditions.
Description of drawings
Fig. 1 is the original video background image of an embodiment of the invention.
Fig. 2 is the original current frame image of the video of the embodiment.
Fig. 3 is the binary moving foreground region image segmented in the example of Fig. 2.
Fig. 4 is the moving foreground region image segmented in the example of Fig. 2.
Embodiment
A specific embodiment of the present invention is as follows: the original video background image of this example is shown in Fig. 1, and the current frame image in Fig. 2. The image of Fig. 2 is differenced with the image of Fig. 1, a multi-scale wavelet transform is applied to the resulting difference image, and, since the moving foreground object shows clearly large amplitude changes, the moving foreground object region is segmented. The concrete steps are as follows:
(1) Background difference: subtract the background image I₂(x, y) from the current frame image I₁(x, y) to obtain the difference image D(x, y):

D(x, y) = I₁(x, y) − I₂(x, y)
(2) Multi-scale wavelet transform of the difference image:

E(x, y) = √( (h ⊗ D)²(x, y) + (v ⊗ D)²(x, y) )

where D is the difference image, h and v are the filter operators in the horizontal and vertical directions respectively, and ⊗ denotes convolution.
(3) Determination of the foreground moving object region: determine a threshold T for the multi-scale wavelet transform E of the difference image; the region formed by all pixels whose E value exceeds T is defined as the video foreground moving region.
Fig. 3 is the binary moving foreground object region obtained as above; Fig. 4 is the segmented moving foreground object image.
Claims (1)
1. A video foreground moving object segmentation method based on multi-scale wavelet transform, characterized in that, because the wavelet transform is localized in both the time domain and the frequency domain, interference is suppressed and local information is extracted; based on background difference, multi-scale wavelet features are used and a multi-scale wavelet transform is applied to the difference image; since the moving foreground object produces large amplitude changes, foreground moving object information is extracted from the video image; the concrete steps are as follows:
1) Background difference: subtract the background image I₂(x, y) from the current frame image I₁(x, y) to obtain the difference image D(x, y):

D(x, y) = I₁(x, y) − I₂(x, y);
2) Multi-scale wavelet transform of the difference image:

E(x, y) = √( (h ⊗ D)²(x, y) + (v ⊗ D)²(x, y) )

where D is the difference image, h and v are the filter operators in the horizontal and vertical directions respectively, and ⊗ denotes convolution;
3) Determination of the foreground moving object region: determine a threshold T for the multi-scale wavelet transform E of the difference image; the region formed by all pixels whose E value exceeds T is defined as the video foreground moving region.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CNB2008100323905A | 2008-01-08 | 2008-01-08 | Video foreground moving object segmentation method based on multi-scale wavelet transform
Publications (2)
Publication Number | Publication Date
---|---
CN101216940A | 2008-07-09
CN100555329C | 2009-10-28
Family
ID=39623368
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CNB2008100323905A (Expired - Fee Related) | Video foreground moving object segmentation method based on multi-scale wavelet transform | 2008-01-08 | 2008-01-08
Country Status (1)
Country | Link
---|---
CN | CN100555329C
Also Published As
Publication number | Publication date
---|---
CN101216940A | 2008-07-09
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| C14 | Grant of patent or utility model | |
| GR01 | Patent grant | |
| C17 | Cessation of patent right | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20091028; Termination date: 20120108 |