CN107248164A - A dynamic background extraction method - Google Patents
A dynamic background extraction method
- Publication number
- CN107248164A CN107248164A CN201710464527.3A CN201710464527A CN107248164A CN 107248164 A CN107248164 A CN 107248164A CN 201710464527 A CN201710464527 A CN 201710464527A CN 107248164 A CN107248164 A CN 107248164A
- Authority
- CN
- China
- Prior art keywords
- pixel
- image
- white point
- value
- moving object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30221—Sports video; Sports image
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The present invention proposes a dynamic background extraction method comprising the following steps: an RGB image is converted into a grayscale image, an inter-frame difference is computed between two consecutive frames, and the frame-difference image is binarized with a threshold T; after differencing and binarization, a connected region containing an internal hole is obtained, and the hole is then filled, yielding a moving-object region whose interior is a complete connected region; once the moving-object region is obtained, each pixel of the image is judged on this basis for the presence of a moving object, where S denotes the cumulative number of times a pixel produces no connectivity change within time t; when the pixel remains unchanged for k consecutive times, the current pixel is extracted as a background point. The invention segments the moving target by comparing the input image with a background image, effectively preserves the integrity of the target, and the result directly reflects the position, size and shape information of the moving target, so an accurate moving target can be obtained and the influence of interference is effectively reduced.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a dynamic background extraction scheme.
Background technology
In recent years, with the rapid development of network and multimedia technology, Internet culture carried by the network has become a new trend in contemporary culture. Digital information such as plain text, digital images and video is growing at a geometric rate and has a significant impact on people's lives. This mass of data contains not only useful information but also an increasing amount of pornographic, violent and subversive content. Manually screening such information is clearly impractical, so computers must be able to detect and identify it automatically. Since character recognition technology is now relatively mature, locating and extracting the text information in complex images and video is of considerable importance.
Three methods are commonly used to detect and classify moving targets in a video sequence: the optical flow method, the inter-frame difference method and the background subtraction method. Motion detection based on optical flow uses the optical flow field of the moving target as it changes over time; it can extract and track the moving target efficiently without any prior knowledge of the scene, but its computation is very complex, its noise immunity is poor, and it cannot be applied to real-time processing of a full-frame video stream without dedicated hardware. The inter-frame difference method extracts the moving region in an image from the difference between adjacent frames in the image sequence. It is insensitive to scene changes such as illumination, is stable, and adapts well to dynamic environments, but it generally cannot extract all of the relevant feature pixels completely: when the object moves slowly and adjacent frames overlap, the extracted moving object is incomplete and "holes" easily appear inside the moving entity.
In view of the above problems, the present invention proposes a new technical scheme.
Summary of the invention
The present invention provides a dynamic background extraction method that combines the inter-frame difference method with the background subtraction method.
The present invention is achieved through the following technical solutions:
A dynamic background extraction method comprises the following steps:
The RGB image is converted into a grayscale image; for a given pixel, its gray values from time t1 to tn can be represented as a matrix; an inter-frame difference is computed between two consecutive frames, and the frame-difference image is binarized with a threshold T;
After differencing and binarization, a connected region containing an internal hole is obtained, and the hole is then filled;
The filling method in this step is as follows: the image is first scanned bottom-up, row by row in the horizontal direction from left to right, for white points; if the distance between adjacent white points is smaller than a set distance threshold d, those adjacent white points are connected into a line; the image is then scanned in the vertical direction, from left to right and bottom-up, for white points, and if the distance between adjacent white points is smaller than the set distance threshold d, those adjacent white points are connected into a line; in this way a moving-object region whose interior is a complete connected region is obtained;
After the moving-object region is obtained, each pixel of the image is judged on this basis for the presence of a moving object; S is defined as the cumulative number of times the pixel produces no connectivity change within time t; when the pixel remains unchanged for k consecutive times, the current pixel is extracted as a background point and its gray value is saved.
Further, for a fixed pixel, the sum of the weighted current background image and the weighted current image is used as the updated background image.
Further, connecting adjacent white points into a line in this step actually means filling the black points between the white points with white points.
Further, in this step, when the pixel is black it indicates no change, so S is incremented by 1; otherwise S is reset to zero.
Further, the weight is 0.1.
The beneficial effects of the invention are as follows: the invention segments the moving target by comparing the input image with a background image, which effectively preserves the integrity of the target; the result directly reflects the position, size and shape information of the moving target, so an accurate moving target can be obtained and the influence of interference is effectively reduced.
Embodiment
The present invention is described further with reference to the following embodiment.
Embodiment 1
A dynamic background extraction method comprises the following steps:
In background extraction, to simplify processing and storage, the background is extracted by operating on the gray values of the image pixels. Most current camera systems are based on the RGB color space, in which each pixel is a three-dimensional vector representing the intensities of the red, green and blue components. The RGB image is therefore converted into a grayscale image; for a given pixel, its gray values from time t1 to tn can be represented as a matrix; an inter-frame difference is computed between two consecutive frames, and the frame-difference image is binarized with a threshold T;
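As an illustration only, a minimal NumPy sketch of this step is given below; the helper names (to_gray, frame_diff_mask) and the example threshold T = 25 are assumptions made for the sketch and are not specified by the patent.

```python
import numpy as np

def to_gray(rgb):
    # Convert an H x W x 3 RGB frame to 8-bit grayscale with the usual luminance weights.
    return (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]).astype(np.uint8)

def frame_diff_mask(prev_gray, curr_gray, T=25):
    # Absolute inter-frame difference of two consecutive grayscale frames,
    # binarized with threshold T: 255 (white) where the scene changed, 0 (black) otherwise.
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return np.where(diff > T, 255, 0).astype(np.uint8)

# Example usage with two synthetic frames:
prev_rgb = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
curr_rgb = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
mask = frame_diff_mask(to_gray(prev_rgb), to_gray(curr_rgb), T=25)
```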
After differencing and binarization, a connected region containing an internal hole is obtained, and the hole is then filled;
The filling method in this step is as follows: the image is first scanned bottom-up, row by row in the horizontal direction from left to right, for white points; if the distance between adjacent white points is smaller than a set distance threshold d, those adjacent white points are connected into a line; the image is then scanned in the vertical direction, from left to right and bottom-up, for white points, and if the distance between adjacent white points is smaller than the set distance threshold d, those adjacent white points are connected into a line; in this way a moving-object region whose interior is a complete connected region is obtained;
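One possible reading of this filling step, sketched below with NumPy; the scan order shown, the helper names (fill_holes, _fill_line) and the example threshold d = 10 are assumptions for illustration, and the mask is taken to be the 0/255 image produced by the previous step.

```python
import numpy as np

def _fill_line(line, d):
    # In-place on a 1-D view: if two consecutive white points are closer than the
    # distance threshold d, fill the black points between them with white points.
    idx = np.flatnonzero(line == 255)
    for a, b in zip(idx[:-1], idx[1:]):
        if b - a < d:
            line[a:b + 1] = 255

def fill_holes(mask, d=10):
    filled = mask.copy()
    h, w = filled.shape
    # Horizontal pass: bottom-up over the rows, left to right within each row.
    for r in range(h - 1, -1, -1):
        _fill_line(filled[r, :], d)
    # Vertical pass: left to right over the columns, bottom-up within each column.
    for c in range(w):
        _fill_line(filled[::-1, c], d)
    return filled
```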
After the moving-object region is obtained, each pixel of the image is judged on this basis for the presence of a moving object; S is defined as the cumulative number of times the pixel produces no connectivity change within time t; when the pixel remains unchanged for k consecutive times, the current pixel is extracted as a background point and its gray value is saved.
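A sketch, under stated assumptions, of how the per-pixel counter S and the background decision might be maintained; the threshold k = 30 and the array-based bookkeeping are illustrative choices, not values taken from the patent.

```python
import numpy as np

def update_counter_and_background(mask, S, background, curr_gray, k=30):
    # mask: filled frame-difference image (0 = black / no change, 255 = white / change).
    unchanged = (mask == 0)
    S = np.where(unchanged, S + 1, 0)      # black pixel: S += 1; otherwise reset S
    is_background = (S >= k)               # unchanged for k consecutive frames
    # Extract the pixel as a background point and save its gray value.
    background = np.where(is_background, curr_gray, background)
    return S, background, is_background

# Example: start with an empty counter and a zero background model.
h, w = 240, 320
S = np.zeros((h, w), dtype=np.int32)
background = np.zeros((h, w), dtype=np.uint8)
```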
In this embodiment, considering the influence of the environment, illumination, air, the camera and so on, and to improve robustness to minor variations, for a pixel that has been confirmed as background, i.e. a fixed pixel, the sum of the weighted current background image and the weighted current image is used as the updated background image.
In this embodiment, connecting adjacent white points into a line in the above step actually means filling the black points between the white points with white points.
In this embodiment, when the pixel in the above step is black it indicates no change, so S is incremented by 1; otherwise S is reset to zero.
In this embodiment, since the weight is related to the update speed, an appropriate weight has been determined for this system; the weight is 0.1.
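For the weighted update, a minimal sketch assuming the rule is a running average of the form B_new = (1 - w) * B_old + w * I with w = 0.1; the patent gives the weight 0.1 but not an explicit formula, so this particular split of the weights is an assumption.

```python
import numpy as np

def refresh_background(background, curr_gray, is_background, w=0.1):
    # Blend the current frame into the background only at pixels already confirmed as background.
    blended = (1.0 - w) * background.astype(np.float32) + w * curr_gray.astype(np.float32)
    updated = np.where(is_background, blended, background.astype(np.float32))
    return updated.astype(np.uint8)
```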
The invention segments the moving target by comparing the input image with a background image, effectively preserving the integrity of the target; the result directly reflects the position, size and shape information of the moving target, so an accurate moving target can be obtained and the influence of interference is effectively reduced.
Claims (5)
1. A dynamic background extraction method, characterized in that it comprises the following steps:
an RGB image is converted into a grayscale image; for a given pixel, its gray values from time t1 to tn can be represented as a matrix; an inter-frame difference is computed between two consecutive frames, and the frame-difference image is binarized with a threshold T;
after differencing and binarization, a connected region containing an internal hole is obtained, and the hole is then filled;
the filling method in this step is as follows: the image is first scanned bottom-up, row by row in the horizontal direction from left to right, for white points; if the distance between adjacent white points is smaller than a set distance threshold d, those adjacent white points are connected into a line; the image is then scanned in the vertical direction, from left to right and bottom-up, for white points, and if the distance between adjacent white points is smaller than the set distance threshold d, those adjacent white points are connected into a line; in this way a moving-object region whose interior is a complete connected region is obtained;
after the moving-object region is obtained, each pixel of the image is judged on this basis for the presence of a moving object; S is defined as the cumulative number of times the pixel produces no connectivity change within time t; when the pixel remains unchanged for k consecutive times, the current pixel is extracted as a background point and its gray value is saved.
2. The dynamic background extraction method according to claim 1, characterized in that, for a fixed pixel, the sum of the weighted current background image and the weighted current image is used as the updated background image.
3. The dynamic background extraction method according to claim 1, characterized in that connecting adjacent white points into a line in said step actually means filling the black points between the white points with white points.
4. The dynamic background extraction method according to claim 1, characterized in that, in said step, when the pixel is black it indicates no change, so S is incremented by 1; otherwise S is reset to zero.
5. The dynamic background extraction method according to claim 2, characterized in that the weight is 0.1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710464527.3A CN107248164A (en) | 2017-06-19 | 2017-06-19 | A kind of dynamic background extracting method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710464527.3A CN107248164A (en) | 2017-06-19 | 2017-06-19 | A kind of dynamic background extracting method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107248164A (en) | 2017-10-13
Family
ID=60018211
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710464527.3A Withdrawn CN107248164A (en) | 2017-06-19 | 2017-06-19 | A kind of dynamic background extracting method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107248164A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080187219A1 (en) * | 2007-02-05 | 2008-08-07 | Chao-Ho Chen | Video Object Segmentation Method Applied for Rainy Situations |
CN101621615A (en) * | 2009-07-24 | 2010-01-06 | 南京邮电大学 | Self-adaptive background modeling and moving target detecting method |
CN107316314A (en) * | 2017-06-07 | 2017-11-03 | 太仓诚泽网络科技有限公司 | A kind of dynamic background extracting method |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111723644A (en) * | 2020-04-20 | 2020-09-29 | 北京邮电大学 | Method and system for detecting occlusion of surveillance video |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WW01 | Invention patent application withdrawn after publication | Application publication date: 20171013 |