CN102129692A - Method and system for detecting motion target in double threshold scene - Google Patents
- Publication number
- CN102129692A CN102129692A CN2011100790923A CN201110079092A CN102129692A CN 102129692 A CN102129692 A CN 102129692A CN 2011100790923 A CN2011100790923 A CN 2011100790923A CN 201110079092 A CN201110079092 A CN 201110079092A CN 102129692 A CN102129692 A CN 102129692A
- Authority
- CN
- China
- Prior art keywords
- threshold
- value
- video
- target detection
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses a method and a system for detecting moving targets in a video image scene, covering a family of techniques for video background extraction, moving-target detection, and moving-target segmentation. The system comprises a video sampling module, a scene target detection module, a network switch/router, and a display device. The advantage of the invention is that the proposed double-threshold target detection and tracking method and system perform well and can be conveniently applied in a practical system.
Description
Technical field
The present invention belongs to the fields of video detection and civil-aviation airport-surface monitoring. Specifically, it relates to a scene moving-target detection method and system based on video images, together with related techniques such as video background extraction and moving-target detection and segmentation.
Background technology
Scene target tracking is a key technique in airport surface monitoring, and video-based monitoring of moving targets on the surface is an important direction in this field. After background extraction is finished, the choice of the segmentation threshold directly affects the quality of moving-target detection and tracking. With a single fixed threshold, or with an adaptive threshold, video images are disturbed by scene content and imaging noise, so foreground and background are difficult to separate fully, and the detection and tracking of foreground targets is unstable. For example, when the threshold is low, noise can flood the foreground targets and detection and tracking fail completely; when the threshold is high, noise is well suppressed, but genuine foreground targets, especially small moving targets on the surface, may be suppressed as well.
In existing mature target detection and tracking techniques, the threshold is usually chosen as a fixed value or adaptively. The fixed-threshold method is simple to implement and fast, and is suitable for simple scenes; but with complex scenes, illumination changes, or targets whose colour is close to the background, detection often fails, so its adaptivity is poor. The adaptive-threshold method takes the difference between background and foreground into account when choosing the threshold: based on grey-level statistics, a grey value at a certain percentile is chosen as the threshold for segmenting background from foreground targets. This method is common in contrast-based target detection and tracking; it works well for targets against a sky background, but is unsatisfactory for complex ground scenes.
Summary of the invention
The objective of the invention is to solve the accuracy and stability problems of moving-target detection and tracking under the complex backgrounds and illumination conditions of an airport surface, by providing an effective double-threshold target detection and tracking method and system that can be conveniently applied in a practical system.
To solve the technical problems described above, the double-threshold scene target detection and tracking method of the present invention is realized with the following steps:
1. Extract the luminance component of the video frames, and use it to initialize and update the background;
2. Extract the luminance component of the current frame and difference it against the background; the resulting grey-difference image is denoted GrayDiff;
3. Compute the mean E and standard deviation σ of the grey-difference image GrayDiff;
4. According to the results of step 3, set the low threshold and the high threshold respectively;
5. Binarize the grey-difference image GrayDiff with the low threshold and with the high threshold respectively, obtaining coarse segmentation results for the moving targets;
6. Apply erosion and dilation to the binarized images to remove noise;
7. In the high-threshold binary image, collect all regions whose pixel area A is greater than A_TH, and denote them A_Hi;
8. For each region A_Hi of the high-threshold binary image, collect all regions in the corresponding area of the low-threshold binary image whose pixel area A is greater than A_TL, and denote them A_Li;
9. Mark each region A_Li in the low-threshold binary image, and take the marked rectangle as the coordinates of the scene target.
In step 1, extracting the luminance component means first converting the input video to the YUV or YCbCr colour space and then separating out the luminance component Y.
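As a sketch of this conversion, the luminance component can be computed from RGB with the standard BT.601 weights conventionally used for the Y component of both YUV and YCbCr; the patent does not state the conversion matrix, so these coefficients are the conventional ones, not taken from the source:

```python
def luma(r, g, b):
    # BT.601 luma weights for the Y component of YUV/YCbCr.
    # These are standard values, assumed here, not quoted from the patent.
    return 0.299 * r + 0.587 * g + 0.114 * b
```

Applying this per pixel to an RGB frame yields the luminance plane used for background modelling and differencing.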
In step 2, differencing means taking the absolute value after subtraction: each element of the grey-difference image GrayDiff is the absolute difference between a pixel value in the current frame and the pixel value at the corresponding position in the initial background.
In step 3, the mean E is the average of all pixels of the grey image GrayDiff, and the standard deviation σ is the square root of the averaged squared differences between each pixel of GrayDiff and the mean E.
In step 4, the low threshold T_L and the high threshold T_H are determined from the statistics of the data in the grey-difference image GrayDiff. In principle the two threshold values may be chosen freely; in the preferred scheme the low threshold is the mean plus 4 times the standard deviation, and the standard-deviation coefficient of the high threshold is at least 2 times that of the low threshold.
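The threshold computation of steps 3 and 4 can be sketched as follows, assuming the preferred coefficients (low threshold E + 4σ; the high coefficient of 8 below is one admissible choice satisfying the stated "at least 2×" condition, not a value fixed by the source):

```python
def dual_thresholds(gray_diff, k_low=4.0, k_high=8.0):
    # gray_diff: flat list of pixel values of the grey-difference image.
    n = len(gray_diff)
    mean = sum(gray_diff) / n                                      # E
    sigma = (sum((p - mean) ** 2 for p in gray_diff) / n) ** 0.5   # sigma
    return mean + k_low * sigma, mean + k_high * sigma             # (T_L, T_H)
```

The pair (T_L, T_H) then drives the two binarizations of step 5.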
In step 5, binarization with the low and high thresholds means setting each pixel of the grey-difference image GrayDiff that is above the threshold to 1 (grey value 255) and each pixel below the threshold to 0 (grey value 0). After the low-threshold and high-threshold binarizations, the binary images corresponding to the low and to the high threshold are obtained respectively.
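Step 5 can be sketched as below, applied once with T_L and once with T_H to obtain the two binary images; a plain nested list stands in for the image:

```python
def binarize(image, threshold):
    # Pixels above the threshold become foreground (grey value 255),
    # all others become background (grey value 0).
    return [[255 if p > threshold else 0 for p in row] for row in image]
```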
In step 6, both the erosion and the dilation operation use a 3x3 template.
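A pure-Python sketch of the 3x3 erosion and dilation of step 6; applying erosion followed by dilation (a morphological opening) removes isolated noise pixels while roughly preserving larger foreground regions:

```python
def erode3x3(img):
    # A pixel survives erosion only if its whole 3x3 neighbourhood is
    # foreground; border pixels are cleared for simplicity.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if all(img[y + dy][x + dx] == 255
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 255
    return out

def dilate3x3(img):
    # A pixel becomes foreground if any pixel in its 3x3 window is.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if any(img[y + dy][x + dx] == 255
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if 0 <= y + dy < h and 0 <= x + dx < w):
                out[y][x] = 255
    return out
```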
In step 7, the Canny operator is used to segment the regions of the high-threshold binary image; regions with an area greater than A_TH are kept and recorded, where the value of A_TH is 80 pixels. Note that the number of regions A_Hi is not necessarily one.
In step 8, the regions A_Li of the low-threshold binary image are determined by taking each region A_Hi obtained in step 7 and examining the corresponding pixel area in the low-threshold binary image, where the value of A_TL is 350 pixels. Note that the number of regions A_Li is not necessarily one.
In step 9, region marking means computing the coordinates of the upper-left and lower-right corners of the rectangle enclosing region A_Li, and taking these coordinates as the coordinates of the corresponding scene target.
As shown in Fig. 2, the system consists of a video acquisition unit (containing local and remote video acquisition units), a scene target detection module, a network switch/router, a video decoder, a video storage server and DVR, a video acquisition control module, and a display unit. The video acquisition module comprises local and remote video acquisition devices, specifically local and remote cameras. The local video acquisition devices are connected to the video decoder through the switch/router over a local area network; the remote video acquisition devices are connected to the video decoder over the Internet or a dedicated network. The video decoder decodes the local or remote video streams, sends the decoded data to the video pre-processing module, and simultaneously copies the video streams to the video storage server and DVR devices. The video storage server and DVR record the video streams for playback. The video pre-processing module performs video frame extraction, pre-processing (including image denoising, image-quality enhancement, resolution adjustment, picture cropping, etc.) and frame buffering control, and sends the data to be processed to the scene moving-target detection module. The scene moving-target detection module applies the double-threshold scene moving-target detection method to the video content to detect targets and mark their coordinates. Its detection results are sent directly to the display unit, which superimposes them on the corresponding video and delivers the final result to the display devices. The display devices include projectors, liquid-crystal displays, CRT monitors, notebook terminals, etc. The video acquisition control module controls the parameters of the video capture devices (such as exposure time, aperture size, video format, compression standard, and operating mode), communicating through the switch/router over a local area network, the Internet, or a dedicated network. The hardware connections of the system are shown in the figure.
In summary, owing to the technical scheme above, the beneficial effect of the invention is an effective double-threshold target detection and tracking method and system that can be conveniently applied in a practical system.
Description of drawings
The present invention will be illustrated by example with reference to the accompanying drawings, in which:
Fig. 1 is a flow chart of the detection method of the present invention.
Fig. 2 is a hardware block diagram of the system of the present invention.
Embodiment
All features disclosed in this specification, and all steps of any method or process disclosed, may be combined in any way, except where features and/or steps are mutually exclusive.
Any feature disclosed in this specification (including any accompanying claims, abstract, and drawings) may, unless specifically stated otherwise, be replaced by an alternative feature serving an equal or similar purpose. That is, unless specifically stated otherwise, each feature is only one example of a series of equivalent or similar features.
As shown in Fig. 1, the double-threshold scene target detection and tracking method of the present invention is realized with the following steps:
1. Extract the luminance component of the video frames, and use it to initialize and update the background;
2. Extract the luminance component of the current frame and difference it against the background; the resulting grey-difference image is denoted GrayDiff;
3. Compute the mean E and standard deviation σ of the grey-difference image GrayDiff;
4. According to the results of step 3, set the low threshold and the high threshold respectively;
5. Binarize the grey-difference image GrayDiff with the low threshold and with the high threshold respectively, obtaining coarse segmentation results for the moving targets;
6. Apply erosion and dilation to the binarized images to remove noise;
7. In the high-threshold binary image, collect all regions whose pixel area A is greater than A_TH, and denote them A_Hi;
8. For each region A_Hi of the high-threshold binary image, collect all regions in the corresponding area of the low-threshold binary image whose pixel area A is greater than A_TL, and denote them A_Li;
9. Mark each region A_Li in the low-threshold binary image, and take the marked rectangle as the coordinates of the scene target.
As shown in Fig. 2, the system comprises a video acquisition module, a scene target detection module, a network switch/router, and display devices. The video acquisition module comprises local and remote video acquisition devices, specifically local and remote cameras. The local video acquisition devices are connected to the video decoder through the switch/router, and the remote video acquisition devices are connected to the video decoder over the Internet or a dedicated network. The display devices include projectors, liquid-crystal displays, CRT monitors, and notebook terminals.
The present invention is not limited to the foregoing embodiments. The invention extends to any new feature, or any new combination of features, disclosed in this specification, and to any new method or process step, or any new combination thereof, so disclosed.
Claims (10)
1. A double-threshold scene target detection and tracking method, characterized by comprising the following steps:
1) extracting the luminance component of the video frames, and using it to initialize and update the background;
2) extracting the luminance component of the current frame and differencing it against the background, the resulting grey-difference image being denoted GrayDiff;
3) computing the mean E and standard deviation σ of the grey-difference image GrayDiff;
4) according to the results of step 3), setting the low threshold and the high threshold respectively;
5) binarizing the grey-difference image GrayDiff with the low threshold and with the high threshold respectively, obtaining coarse segmentation results for the moving targets;
6) applying erosion and dilation to the binarized images to remove noise;
7) in the high-threshold binary image, collecting all regions whose pixel area A is greater than A_TH, and denoting them A_Hi;
8) for each region A_Hi of the high-threshold binary image, collecting all regions in the corresponding area of the low-threshold binary image whose pixel area A is greater than A_TL, and denoting them A_Li;
9) marking each region A_Li in the low-threshold binary image, and taking the marked rectangle as the coordinates of the scene target.
2. The double-threshold scene target detection and tracking method according to claim 1, characterized in that in step 1), extracting the luminance component means first converting the input video to the YUV or YCbCr colour space and then separating out the luminance component Y.
3. The double-threshold scene target detection and tracking method according to claim 1, characterized in that in step 2), differencing means taking the absolute value after subtraction: each element of the grey-difference image GrayDiff is the absolute difference between a pixel value in the current frame and the pixel value at the corresponding position in the initial background.
4. The double-threshold scene target detection and tracking method according to claim 1, characterized in that in step 3), the mean E is the average of all pixels of the grey image GrayDiff, and the standard deviation σ is the square root of the averaged squared differences between each pixel of GrayDiff and the mean E.
5. The double-threshold scene target detection and tracking method according to claim 1, characterized in that in step 4), the low threshold T_L and the high threshold T_H are determined from the statistics of the data in the grey-difference image GrayDiff; in principle the two threshold values may be chosen freely, while in the preferred scheme the low threshold is the mean plus 4 times the standard deviation, and the standard-deviation coefficient of the high threshold is at least 2 times that of the low threshold.
6. The double-threshold scene target detection and tracking method according to claim 1, characterized in that in step 5), binarization with the low and high thresholds means setting each pixel of the grey-difference image GrayDiff that is above the threshold to 1, i.e. grey value 255, and each pixel below the threshold to 0, i.e. grey value 0; after the low-threshold and high-threshold binarizations, the binary images corresponding to the low and to the high threshold are obtained respectively.
7. The double-threshold scene target detection and tracking method according to claim 1, characterized in that in step 6), both erosion and dilation use a 3x3 template; and in step 7), the Canny operator is used to segment the regions of the high-threshold binary image, regions with an area greater than A_TH being kept and recorded, where the value of A_TH is 80 pixels; the number of regions A_Hi is not necessarily one.
8. The double-threshold scene target detection and tracking method according to claim 1, characterized in that in step 8), the regions A_Li of the low-threshold binary image are determined by examining, for each region A_Hi obtained in step 7), the corresponding pixel area in the low-threshold binary image, where the value of A_TL is 350 pixels; the number of regions A_Li is not necessarily one.
9. The double-threshold scene target detection and tracking method according to claim 1, characterized in that in step 9), region marking means computing the coordinates of the upper-left and lower-right corners of the rectangle enclosing region A_Li, and taking these coordinates as the coordinates of the corresponding scene target.
10. A system applying the double-threshold scene target detection and tracking method according to claim 1, characterized in that the system comprises a video acquisition module, a scene target detection module, a network switch/router, and display devices; the video acquisition module comprises local and remote video acquisition devices, specifically local and remote cameras; the local video acquisition devices are connected to the video decoder through the switch/router, and the remote video acquisition devices are connected to the video decoder over the Internet or a dedicated network; the display devices include projectors, liquid-crystal displays, CRT monitors, and notebook terminals.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011100790923A CN102129692A (en) | 2011-03-31 | 2011-03-31 | Method and system for detecting motion target in double threshold scene |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011100790923A CN102129692A (en) | 2011-03-31 | 2011-03-31 | Method and system for detecting motion target in double threshold scene |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102129692A true CN102129692A (en) | 2011-07-20 |
Family
ID=44267767
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2011100790923A Pending CN102129692A (en) | 2011-03-31 | 2011-03-31 | Method and system for detecting motion target in double threshold scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102129692A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030189729A1 (en) * | 2002-04-09 | 2003-10-09 | Samsung Electronics Co., Ltd. | Method and apparatus for converting brightness level of image |
CN1885346A (en) * | 2006-06-01 | 2006-12-27 | 电子科技大学 | Detection method for moving target in infrared image sequence under complex background |
CN1918604A (en) * | 2004-12-15 | 2007-02-21 | 三菱电机株式会社 | Method for modeling background and foreground regions |
CN201360312Y (en) * | 2009-01-09 | 2009-12-09 | 湖南省建筑工程集团总公司 | Monitoring system based on embedded Web video server |
CN101826209A (en) * | 2010-04-29 | 2010-09-08 | 电子科技大学 | Canny model-based method for segmenting three-dimensional medical image |
- 2011-03-31: CN patent application CN2011100790923A filed (published as CN102129692A, status pending)
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102915543A (en) * | 2012-09-12 | 2013-02-06 | 西安电子科技大学 | Figure motion change detecting method based on extracting function and three-channel separation |
CN102915543B (en) * | 2012-09-12 | 2015-01-07 | 西安电子科技大学 | Figure motion change detecting method based on extracting function and three-channel separation |
CN106448276A (en) * | 2016-07-28 | 2017-02-22 | 南京航空航天大学 | Airport surface moving target detection and speed sequence acquisition method |
CN107103302A (en) * | 2017-04-26 | 2017-08-29 | 重庆邮电大学 | Behavior extracting method based on optimum detection thresholding |
CN107103302B (en) * | 2017-04-26 | 2020-04-17 | 重庆邮电大学 | Behavior extraction method based on optimal detection threshold |
CN108989696A (en) * | 2018-07-11 | 2018-12-11 | 江苏安威士智能安防有限公司 | Automatic explosion method based on temperature figure |
CN108989696B (en) * | 2018-07-11 | 2021-09-28 | 江苏安威士智能安防有限公司 | Automatic exposure method based on heat map |
CN109946671A (en) * | 2019-04-12 | 2019-06-28 | 哈尔滨工程大学 | A kind of underwater manoeuvre Faint target detection tracking based on dual-threshold judgement |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Shi et al. | Revisiting perspective information for efficient crowd counting | |
Zhang et al. | Fast haze removal for nighttime image using maximum reflectance prior | |
CN107256225B (en) | Method and device for generating heat map based on video analysis | |
Huang et al. | An advanced single-image visibility restoration algorithm for real-world hazy scenes | |
WO2018099136A1 (en) | Method and device for denoising image with low illumination, and storage medium | |
CN101236656B (en) | Movement target detection method based on block-dividing image | |
US20120062594A1 (en) | Methods and Systems for Collaborative-Writing-Surface Image Formation | |
Karaman et al. | Comparison of static background segmentation methods | |
US9639943B1 (en) | Scanning of a handheld object for 3-dimensional reconstruction | |
US9542735B2 (en) | Method and device to compose an image by eliminating one or more moving objects | |
CN111062293B (en) | Unmanned aerial vehicle forest flame identification method based on deep learning | |
Chu et al. | Object tracking algorithm based on camshift algorithm combinating with difference in frame | |
Milford et al. | Condition-invariant, top-down visual place recognition | |
KR20180054808A (en) | Motion detection within images | |
Okade et al. | Video stabilization using maximally stable extremal region features | |
CN102129692A (en) | Method and system for detecting motion target in double threshold scene | |
CN109711256B (en) | Low-altitude complex background unmanned aerial vehicle target detection method | |
Feng et al. | Infrared target detection and location for visual surveillance using fusion scheme of visible and infrared images | |
CN107886518B (en) | Picture detection method and device, electronic equipment and readable storage medium | |
Wu et al. | Overview of video-based vehicle detection technologies | |
Siricharoen et al. | Robust outdoor human segmentation based on color-based statistical approach and edge combination | |
CN102006462A (en) | Rapid monitoring video enhancement method by using motion information and implementation device thereof | |
Chen et al. | Moving vehicle detection based on union of three-frame difference | |
Hu et al. | A low illumination video enhancement algorithm based on the atmospheric physical model | |
Othman et al. | Enhanced single image dehazing technique based on HSV color space |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C12 | Rejection of a patent application after its publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20110720 |