CN108038866A - A kind of moving target detecting method based on Vibe and disparity map Background difference - Google Patents
- Publication number
- CN108038866A CN108038866A CN201711400664.7A CN201711400664A CN108038866A CN 108038866 A CN108038866 A CN 108038866A CN 201711400664 A CN201711400664 A CN 201711400664A CN 108038866 A CN108038866 A CN 108038866A
- Authority
- CN
- China
- Prior art keywords
- pixel
- background
- disparity map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/215—Motion-based segmentation
- G06T5/80
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20228—Disparity calculation for image-based rendering
Abstract
The invention discloses a moving target detection method based on ViBe and the disparity map background difference method, and relates to the field of computer vision. The method first establishes a Gaussian model from a disparity map sequence and performs moving target detection using the disparity map background difference method; it then performs moving target detection based on an improved ViBe algorithm; finally, the results of the two detections are combined with an AND operation to obtain the final motion target region, and the background models are updated so that moving target detection can proceed on the next frame. The invention combines the target detection method based on monocular vision with the target detection method based on binocular vision, can extract a complete moving target, alleviates the susceptibility of monocular-vision moving target detection to illumination and shadow, and can also eliminate the ghost phenomenon.
Description
Technical field
The present invention relates to the field of computer vision, and in particular to a moving target detection method based on ViBe and the disparity map background difference method.
Background technology
Moving target detection is the basis of target recognition and tracking. Detecting moving targets quickly and accurately facilitates follow-up work such as target tracking, recognition and behavior understanding, and is widely applied in iris recognition, face recognition, security monitoring, robot navigation, and aircraft and satellite monitoring systems.
The main moving target detection algorithms are the optical flow method, the frame difference method and the background subtraction method. The optical flow method needs special hardware support and its computation is complex and heavy, so it is rarely used. The frame difference method is simple in principle and insensitive to noise and illumination change, but holes easily appear in its detection results. The background subtraction method can extract the complete information of the target, but is susceptible to dynamic changes of the external scene such as illumination. When a moving target switches from slow to fast motion, the background subtraction method easily detects the vacated region as foreground (the current background still retains the moving target information of the previous frame, although the target is no longer in that region), producing the "ghost" phenomenon; it also handles noise poorly in complex scenes with, for example, swaying branches, and adapts poorly to the environment. Such false detections hinder subsequent target tracking. Traditional moving target detection methods based on monocular vision can detect the contour of the moving target, but are easily affected by external conditions and may detect shadows and parts of the background as foreground.
Summary of the invention
The object of the present invention is to overcome the above deficiencies of the prior art by proposing a moving target detection method based on ViBe and the disparity map background difference method. The method is a motion detection method based on binocular vision; it can extract a complete moving target, eliminate the ghosts that appear during motion detection, and, by using the disparity map background difference method, reduce the susceptibility of monocular-vision motion detection to illumination and shadow.
To achieve the above object, the technical solution of the present invention specifically includes the following steps:
S1: under a parallel binocular stereo vision system, acquire images with left and right cameras and perform moving foreground detection based on the disparity map background difference method;
S2: establish a ViBe background model using the last of the left images used in step S1.1 to solve disparity maps, and extract the moving foreground target using an improved ViBe algorithm;
S3: apply an AND operation to the results of steps S1.2 and S2.2 to obtain the moving target detection result, update the background models, and continue the moving target detection of new image frames.
Further, as a preferred technical solution of the present invention, S1 includes:
S1.1: for the left-right image sequences collected by the left and right cameras, compute the disparity map of each pair of left and right images acquired at the same moment, and establish the initial background model from the disparity maps;
S1.2: acquire the next pair of left and right images, solve their disparity map, and perform foreground target detection using the disparity map background difference method.
Further, as a preferred technical solution of the present invention, S1.1 includes: using the census stereo matching method, compute the disparity map B_i (1 ≤ i ≤ n) of the left image f_{l,i} and right image f_{r,i} acquired at the same moment, obtaining the background disparity map sequence B_1, B_2, ..., B_n, and establish a single-Gaussian statistical background model from this sequence. The mean μ_0(x, y) and variance σ_0²(x, y) of pixel (x, y) in the background disparity map are respectively:

μ_0(x, y) = (1/n) Σ_{i=1}^{n} B_i(x, y)

σ_0²(x, y) = (1/n) Σ_{i=1}^{n} [B_i(x, y) − μ_0(x, y)]²

where B_i(x, y) is the disparity value of disparity map B_i at pixel (x, y).
Further, as a preferred technical solution of the present invention, step S1.2 includes: suppose the left and right images acquired at any time t are f_{l,t} and f_{r,t}; compute their disparity map B_t with the census stereo matching algorithm and perform foreground target detection using the disparity map background difference method with the detection formula:

D_t(x, y) = 1, if |B_t(x, y) − μ_t(x, y)| > 2.5 σ_t(x, y); D_t(x, y) = 0, otherwise

where D_t(x, y) is the detection result at pixel (x, y) at time t: 1 means pixel (x, y) is a foreground point and 0 means it is a background point; B_t(x, y) is the disparity value of the background disparity map at pixel (x, y) at time t; μ_t(x, y) is the mean of the Gaussian model of pixel (x, y); σ_t(x, y) is the standard deviation of the Gaussian model of pixel (x, y). If the current frame is the first frame after the initial model is established, then μ_t(x, y) is μ_0(x, y) and σ_t(x, y) is σ_0(x, y).
Further, as a preferred technical solution of the present invention, S2 includes:
S2.1: establish the ViBe background model using the last of the left images used in step S1.1 to solve disparity maps;
S2.2: starting from the next left image, detect the moving foreground target and eliminate ghosts.
Further, as a preferred technical solution of the present invention, the detection of the moving foreground target includes:
detect the moving target from the second frame onward: with the pixel value v(x) of pixel x as the center and R as the radius, create a region S_R(v(x)) in two-dimensional Euclidean color space; the number of background sample values of pixel x contained in S_R(v(x)) is #{S_R(v(x)) ∩ {v_1, v_2, ..., v_N}};
wherein R is set per pixel by formula (6), in which k is the number of background-model sample values compared with pixel p, v(p) is the pixel value at the position of pixel p in the current frame, and v_i is a sample value of the background model of pixel p;
set a threshold #_min: if #{S_R(v(x)) ∩ {v_1, v_2, ..., v_N}} is greater than or equal to #_min, the pixel is a background pixel in the current frame; if it is less than #_min, the pixel is a foreground pixel.
Further, as a preferred technical solution of the present invention, the elimination of ghosts includes:
(1) calculating the optimal segmentation threshold of the current frame;
suppose the gray level count of the current image frame is L and the gray range is [0, L−1]; a segmentation threshold t divides the image into region A with gray levels [0, t] and region B with gray levels [t+1, L−1], where A and B represent the foreground and the background respectively;
the between-class variance is:

σ² = ω_0(μ_0 − μ)² + ω_1(μ_1 − μ)² = ω_0 ω_1 (μ_0 − μ_1)²

where ω_0 is the proportion of foreground pixels in the whole image and μ_0 is the average gray value of the foreground pixels; ω_1 is the proportion of background pixels in the whole image and μ_1 is the average gray value of the background pixels; μ is the average gray value of the whole image;
the gray value at which σ² attains its maximum is the optimal threshold:

t* = arg max_{0 ≤ t ≤ L−1} σ²(t)

(2) performing a secondary discrimination on moving target pixels;
randomly select M of the detected background pixels and compute the average gray value ḡ of these M pixels; suppose f(x) is a detected foreground pixel; the judgment rule is:
if ḡ ≤ t*: when f(x) > t*, f(x) is again judged as foreground; when f(x) ≤ t*, f(x) is re-judged as background;
if ḡ > t*: when f(x) < t*, f(x) is again judged as foreground; when f(x) ≥ t*, f(x) is re-judged as background.
Further, as a preferred technical solution of the present invention, in step S3 the background update includes the update of the disparity background model and the update of the ViBe background model.
Compared with the prior art, the invention has the following advantages:
1) The moving target detection based on the disparity map background difference method of the invention is free from the influence of illumination variation, can extract a complete moving target, and can eliminate the influence of shadow regions on motion detection.
2) The invention uses an improved ViBe algorithm to extract a more accurate motion region; the improved algorithm exploits the pixel-level judgment of the ViBe algorithm together with the Otsu algorithm's use of global image characteristics to eliminate the ghosts that appear during motion detection.
3) The invention combines the ViBe-based moving target detection of monocular vision with the disparity-map background-difference target detection of binocular vision, extracts a complete moving target, effectively avoids the influence of illumination and shadow during target detection, and eliminates the ghost phenomenon.
Brief description of the drawings
Fig. 1 is the flow chart of the moving target detection method in this embodiment.
Detailed description of the embodiments
The technical solution in the embodiments of the present invention is described clearly and completely below with reference to the accompanying drawing. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work belong to the protection scope of the present invention.
The concrete operation flow of the moving target detection method based on ViBe and the disparity map background difference method of the present invention is shown in Fig. 1 and mainly includes the following three steps S1-S3, which are described in detail below.
S1: under a parallel binocular stereo vision system, acquire images with left and right cameras and perform moving foreground detection based on the disparity map background difference method.
Traditional moving foreground detection based on monocular vision is easily affected by light change and takes shadowed parts as moving foreground, whereas a sudden light change does not affect the acquisition of the disparity map. The present invention therefore acquires images synchronously with left and right cameras and establishes the initial background model from disparity maps. S1 specifically includes the following steps:
S1.1: for the left-right image sequences collected by the left and right cameras, compute the disparity map of each pair of left and right images acquired at the same moment, and establish the initial background model from the disparity maps.
Suppose the left camera acquires the left image sequence f_{l,1}, f_{l,2}, ..., f_{l,n}, and the right camera acquires the corresponding right image sequence f_{r,1}, f_{r,2}, ..., f_{r,n}. Using the census stereo matching method, compute the disparity map B_i (1 ≤ i ≤ n) of the left image f_{l,i} and right image f_{r,i} acquired at the same moment, obtaining the background disparity map sequence B_1, B_2, ..., B_n, and establish a single-Gaussian statistical background model from this sequence. A dynamic single-Gaussian statistical background model can better overcome the influence of external environment change on target detection. The mean μ_0(x, y) and variance σ_0²(x, y) of pixel (x, y) in the background disparity map are respectively:

μ_0(x, y) = (1/n) Σ_{i=1}^{n} B_i(x, y)    (1)

σ_0²(x, y) = (1/n) Σ_{i=1}^{n} [B_i(x, y) − μ_0(x, y)]²    (2)

where B_i(x, y) is the disparity value of disparity map B_i at pixel (x, y).
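As an illustration, the per-pixel statistics of formulas (1)-(2) can be sketched in Python with NumPy; the function name and array layout are illustrative and not part of the patent:

```python
import numpy as np

def init_gaussian_background(disparity_maps):
    """Build the per-pixel single-Gaussian background model of S1.1.

    disparity_maps: list of n disparity maps B_1..B_n (2-D arrays).
    Returns (mu0, sigma0), where mu0 = (1/n) * sum(B_i) and
    sigma0 is the square root of (1/n) * sum((B_i - mu0)**2).
    """
    stack = np.stack(disparity_maps).astype(np.float64)  # shape (n, H, W)
    mu0 = stack.mean(axis=0)
    sigma0 = stack.std(axis=0)  # population std, matching formula (2)
    return mu0, sigma0
```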
S1.2: acquire the next pair of left and right images, solve their disparity map, and perform foreground target detection using the disparity map background difference method.
Suppose the left and right images acquired at any time t are f_{l,t} and f_{r,t}; compute their disparity map B_t with the census stereo matching algorithm and perform foreground target detection using the disparity map background difference method with the detection formula:

D_t(x, y) = 1, if |B_t(x, y) − μ_t(x, y)| > 2.5 σ_t(x, y); D_t(x, y) = 0, otherwise    (3)

where D_t(x, y) is the detection result at pixel (x, y) at time t: 1 means pixel (x, y) is a foreground point and 0 means it is a background point; B_t(x, y) is the disparity value of the background disparity map at pixel (x, y) at time t; μ_t(x, y) is the mean of the Gaussian model of pixel (x, y); σ_t(x, y) is its standard deviation. If the current frame is the first frame after the initial model is established, then μ_t(x, y) is μ_0(x, y) and σ_t(x, y) is σ_0(x, y).
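The 2.5σ test of formula (3) is a one-line vectorized operation; this sketch assumes the disparity map and the model parameters are NumPy arrays of equal shape:

```python
import numpy as np

def disparity_foreground(Bt, mu, sigma):
    """Detection formula (3): D_t(x, y) = 1 when
    |B_t(x, y) - mu_t(x, y)| > 2.5 * sigma_t(x, y), else 0."""
    return (np.abs(Bt - mu) > 2.5 * sigma).astype(np.uint8)
```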
S2: establish the ViBe background model using the last of the left images used in step S1.1 to solve disparity maps, and extract the moving foreground target using an improved ViBe algorithm.
The ViBe algorithm has the advantages of fast processing speed and high target extraction accuracy; the present invention therefore improves the ViBe algorithm and uses it to extract the moving foreground target. It mainly includes the following steps:
S2.1: establish the ViBe background model using the last of the left images used in step S1.1 to solve disparity maps.
The ViBe algorithm of the present invention is initialized with the last of the left images used in step S1.1 to establish the initial Gaussian model, and uses a neighborhood method to establish a corresponding background sample set for each pixel. Define the background pixel value at pixel x as v(x), and randomly select N pixel values v_1, v_2, ..., v_N from the 8-neighborhood of each pixel x as the background model sample values of pixel x. Denoting the background model by M(x):

M(x) = {v_1, v_2, ..., v_N}    (4)

The ViBe algorithm of the present invention initializes the background model with the first image frame: for each sample value in a pixel's background sample space, one pixel value is randomly selected from that pixel and its neighborhood to initialize it. In the first frame, y is a sample point randomly chosen in the 8-neighborhood N_G(x) of pixel x; let v_0(y) be the pixel value of the first frame at y; the background model after initialization can then be expressed as:

M_0(x) = {v_0(y) | y ∈ N_G(x)}    (5)

where M_0(x) is the background model after initialization.
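The initialization of formulas (4)-(5) can be sketched as follows; the sample count N = 20 is the value commonly used in the ViBe literature (the patent leaves N unspecified), and image borders are handled by clamping neighbor coordinates:

```python
import numpy as np

def vibe_init(frame, N=20, rng=None):
    """Initialize the ViBe sample set M0(x) by drawing N values from
    each pixel's 8-neighborhood, per formula (5).  Border pixels reuse
    clamped coordinates; offsets may include (0, 0), i.e. the pixel itself."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = frame.shape
    samples = np.empty((N, h, w), dtype=frame.dtype)
    ys, xs = np.mgrid[0:h, 0:w]
    for i in range(N):
        dy = rng.integers(-1, 2, size=(h, w))  # random offset in {-1, 0, 1}
        dx = rng.integers(-1, 2, size=(h, w))
        ny = np.clip(ys + dy, 0, h - 1)
        nx = np.clip(xs + dx, 0, w - 1)
        samples[i] = frame[ny, nx]
    return samples
```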
S2.2: starting from the next left image after the background model is established, detect the moving foreground target and eliminate ghosts.
S2.2.1: classification of background and foreground by the ViBe algorithm based on an adaptive threshold.
Starting from the next left image after the initial background model is established, detect the moving target. With the pixel value v(x) of pixel x as the center and R as the radius, create a region S_R(v(x)) in two-dimensional Euclidean color space, used to compare the pixel value of pixel x in a new image frame with the background sample values at that point and so classify the pixel. When the ViBe algorithm performs foreground detection, it judges whether the sample values in the background model match the current pixel value using a fixed radius threshold R. When R is set too large, foreground pixels whose values are close to the background will be detected as background, so the detected moving target is incomplete. When R is set too small, dynamically changing parts of the background that should not be detected (such as leaves and branches) will be detected, producing more noise in the detection result.
Therefore, to improve the detection accuracy, the method of the present invention sets a threshold R for each pixel according to the pixel's concrete situation. The threshold R is set by formula (6):
in formula (6), k is the number of background-model sample values compared with pixel p; v(p) is the pixel value at the position of pixel p in the current frame; v_i is a sample value of the background model of pixel p.
To prevent the threshold R from being too large or too small and making the detection result inaccurate, the present invention sets upper and lower limits for R, specifically R ∈ [20, 40]: when the R obtained from formula (6) is less than 20, R is set to 20; when the R obtained from formula (6) is greater than 40, R is set to 40.
Further, a region S_R(v(x)) is defined; the number of background sample values of pixel x contained in S_R(v(x)) is #{S_R(v(x)) ∩ {v_1, v_2, ..., v_N}}, and the size of #{S_R(v(x)) ∩ {v_1, v_2, ..., v_N}} is used to judge whether the pixel is a foreground pixel or a background pixel. Initialize #{S_R(v(x)) ∩ {v_1, v_2, ..., v_N}} to 0, and set the threshold for judging a pixel as foreground or background to #_min, whose value is set to 2. If #{S_R(v(x)) ∩ {v_1, v_2, ..., v_N}} is greater than or equal to #_min, the pixel is a background pixel in the current frame; if it is less than #_min, the pixel is a foreground pixel.
S2.2.2: a secondary judgment combining the foreground detection result and the Otsu threshold method to eliminate ghosts.
A ghost is a foreground region that does not correspond to an actual moving target; it is caused by an originally static object in the background suddenly moving, so that the background model becomes inconsistent with the real background. When an object in the background suddenly moves, the region it originally covered is replaced by what was behind it. This change is reflected immediately in the following image sequence, but the background model cannot reflect it immediately. The background model therefore fails for a period of time, producing false detections at the object's original position: a moving target that does not exist is detected, and the ghost phenomenon appears. For the ghost problem, the present invention performs a secondary judgment combining the foreground detection result and the Otsu threshold method to suppress ghosts. The main steps are:
(1) calculating the optimal segmentation threshold of the current frame;
suppose the gray level count of the current image frame is L and the gray range is [0, L−1]; a segmentation threshold t divides the image into region A with gray levels [0, t] and region B with gray levels [t+1, L−1], where A and B represent the foreground region and the background region respectively.
The between-class variance is:

σ² = ω_0(μ_0 − μ)² + ω_1(μ_1 − μ)² = ω_0 ω_1 (μ_0 − μ_1)²    (7)

where ω_0 is the proportion of foreground pixels in the whole image and μ_0 is the average gray value of the foreground pixels; ω_1 is the proportion of background pixels in the whole image and μ_1 is the average gray value of the background pixels; μ is the average gray value of the whole image. The larger the between-class variance, the greater the difference between the two regions and the better the image can be segmented. Therefore, the gray value at which σ² attains its maximum is the optimal threshold, expressed as:

t* = arg max_{0 ≤ t ≤ L−1} ω_0 ω_1 (μ_0 − μ_1)²    (8)
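A direct (unoptimized) search for the Otsu threshold of formulas (7)-(8) can be sketched as:

```python
import numpy as np

def otsu_threshold(img, L=256):
    """Exhaustively pick the t that maximizes the between-class
    variance omega0 * omega1 * (mu0 - mu1)**2 over gray levels."""
    hist = np.bincount(img.ravel(), minlength=L).astype(np.float64)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(L - 1):
        w0 = p[:t + 1].sum()          # proportion of class A ([0, t])
        w1 = 1.0 - w0                 # proportion of class B ([t+1, L-1])
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t + 1) * p[:t + 1]).sum() / w0
        mu1 = (np.arange(t + 1, L) * p[t + 1:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```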
(2) performing a secondary discrimination on moving target pixels.
Randomly select M of the background pixels detected in step (1) and compute the average gray value ḡ of these pixels. Suppose f(x) is a foreground pixel detected in step (1); the judgment rule is:
if ḡ ≤ t*: when f(x) > t*, f(x) is again judged as foreground; when f(x) ≤ t*, f(x) is re-judged as background;
if ḡ > t*: when f(x) < t*, f(x) is again judged as foreground; when f(x) ≥ t*, f(x) is re-judged as background.
Applying this step (2) secondary discrimination to the foreground detected in step (1) filters out the misjudged part, and the ghost part is re-judged as background.
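A sketch of the secondary discrimination follows. The inequality governing which case applies is garbled in the translation, so the direction used here (foreground must lie on the opposite side of t* from the sampled background mean) is a reconstruction, and the function and parameter names are illustrative:

```python
import numpy as np

def ghost_suppress(frame, fg_mask, t_star, M=9, rng=None):
    """Step (2): sample M detected-background pixels, compare their mean
    gray value with t*, then re-judge each detected foreground pixel:
    foreground on the same side of t* as the background is re-judged
    as background (a ghost candidate)."""
    rng = np.random.default_rng() if rng is None else rng
    bg_vals = frame[fg_mask == 0]
    picks = rng.choice(bg_vals, size=min(M, bg_vals.size), replace=False)
    bg_mean = picks.mean()
    out = np.zeros_like(fg_mask)
    fg = fg_mask == 1
    if bg_mean <= t_star:               # background on the dark side
        out[fg & (frame > t_star)] = 1
    else:                               # background on the bright side
        out[fg & (frame < t_star)] = 1
    return out
```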
S3: apply an AND operation to the results of steps S1.2 and S2.2 to obtain the moving target detection result of the image frame; then update the background models and continue the moving target detection of new image frames.
S3.1: apply the AND operation to the results of steps S1.2 and S2.2.
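The S3.1 combination is a pixel-wise AND of the two binary masks; a minimal sketch:

```python
import numpy as np

def combine_masks(disparity_mask, vibe_mask):
    """S3.1: the final motion region keeps only pixels that both the
    disparity background difference and the ViBe detection marked
    as foreground."""
    return (disparity_mask.astype(bool) & vibe_mask.astype(bool)).astype(np.uint8)
```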
S3.2: update the background models, including the update of the disparity background model and the update of the ViBe background model.
S3.2.1: update the disparity background model.
Over time, the background inevitably changes somewhat. For this case, the present invention uses an adaptive background update model to update the background in real time. Specifically, when pixel (x, y) at time t is judged as a background point by formula (3) in step S1.2, the parameters of the Gaussian model are updated by:

μ_{t+1}(x, y) = (1 − α) μ_t(x, y) + α B_t(x, y)    (9)

where α is the background update rate, taken as 0.03.
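The running update of formula (9) can be sketched as below. Only the mean update survives in the text; the companion variance update used here, σ²_{t+1} = (1 − α)σ²_t + α(B_t − μ_t)², is the usual running form and is an assumption, not quoted from the patent:

```python
import numpy as np

def update_gaussian(mu, sigma, Bt, bg_mask, alpha=0.03):
    """Formula (9) applied only where the pixel was judged background;
    the variance update is the standard running form (assumed)."""
    bg = bg_mask.astype(bool)
    mu_new, sigma_new = mu.copy(), sigma.copy()
    mu_new[bg] = (1 - alpha) * mu[bg] + alpha * Bt[bg]
    var_new = (1 - alpha) * sigma[bg] ** 2 + alpha * (Bt[bg] - mu[bg]) ** 2
    sigma_new[bg] = np.sqrt(var_new)
    return mu_new, sigma_new
```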
S3.2.2: update the ViBe background model.
For the background pixels detected in step S2.2, the background model is updated using the background update method of the ViBe algorithm.
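The patent defers to "the background update method of the ViBe algorithm" without detailing it; the standard ViBe conservative update is sketched below, with the subsampling factor φ = 16 taken from the ViBe literature rather than from the patent:

```python
import numpy as np

def vibe_update(frame, samples, bg_mask, phi=16, rng=None):
    """Standard ViBe update: each background pixel replaces one of its
    own samples with probability 1/phi, and with the same probability
    propagates its value into one sample of a random 8-neighbor."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = frame.shape
    N = samples.shape[0]
    for y in range(h):
        for x in range(w):
            if not bg_mask[y, x]:
                continue
            if rng.integers(phi) == 0:      # update own model
                samples[rng.integers(N), y, x] = frame[y, x]
            if rng.integers(phi) == 0:      # spatial diffusion to a neighbor
                ny = int(np.clip(y + rng.integers(-1, 2), 0, h - 1))
                nx = int(np.clip(x + rng.integers(-1, 2), 0, w - 1))
                samples[rng.integers(N), ny, nx] = frame[y, x]
    return samples
```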
S3.3: after the background models are updated, continue the moving target detection of subsequent new image frames with the methods of steps S1.2, S2.2 and S3.
The method of the present invention combines the target detection method based on monocular vision and the target detection method based on binocular vision, overcomes the problem that traditional monocular-vision target detection is easily affected by illumination and shadow, and eliminates the ghost phenomenon during target detection. The moving target detection method based on ViBe and the disparity map background difference method proposed in the present invention can be implemented on an embedded FPGA and applied in cameras with moving target tracking.
It will be clear to those skilled in the art that the scope of the present invention is not restricted to the examples discussed above, and that several changes and modifications are possible without departing from the scope of the present invention as defined by the appended claims. Although the present invention has been illustrated and described in detail in the drawings and the description, such illustration and description are only explanatory or schematic, not restrictive. The present invention is not limited to the disclosed embodiments.
Claims (6)
1. A moving target detection method based on ViBe and the disparity map background difference method, characterized by comprising the following steps:
S1: under a parallel binocular stereo vision system, acquire images with left and right cameras and perform moving foreground detection based on the disparity map background difference method;
S1.1: for the left-right image sequences collected by the left and right cameras, compute the disparity map of each pair of left and right images acquired at the same moment, and establish the initial background model from the disparity maps;
S1.2: acquire the next pair of left and right images, solve their disparity map, and perform foreground target detection using the disparity map background difference method;
S2: establish a ViBe background model using the last of the left images used in step S1.1 to solve disparity maps, and extract the moving foreground target using an improved ViBe algorithm;
S2.1: establish the ViBe background model using the last of the left images used in step S1.1 to solve disparity maps;
S2.2: starting from the next left image, detect the moving foreground target and eliminate ghosts;
S3: apply an AND operation to the results of steps S1.2 and S2.2 to obtain the moving target detection result, update the background models, and continue the moving target detection of new image frames.
2. The moving target detection method based on ViBe and the disparity map background difference method according to claim 1, characterized in that S1.1 includes: using the census stereo matching method, compute the disparity map B_i (1 ≤ i ≤ n) of the left image f_{l,i} and right image f_{r,i} acquired at the same moment, obtaining the background disparity map sequence B_1, B_2, ..., B_n, and establish a single-Gaussian statistical background model from this sequence; the mean μ_0(x, y) and variance σ_0²(x, y) of pixel (x, y) in the background disparity map are respectively:
μ_0(x, y) = (1/n) Σ_{i=1}^{n} B_i(x, y)

σ_0²(x, y) = (1/n) Σ_{i=1}^{n} [B_i(x, y) − μ_0(x, y)]²
where B_i(x, y) is the disparity value of disparity map B_i at pixel (x, y).
3. The moving target detection method based on ViBe and the disparity map background difference method according to claim 2, characterized in that step S1.2 includes: suppose the left and right images acquired at any time t are f_{l,t} and f_{r,t}; compute their disparity map B_t with the census stereo matching algorithm and perform foreground target detection using the disparity map background difference method with the detection formula:
D_t(x, y) = 1, if |B_t(x, y) − μ_t(x, y)| > 2.5 σ_t(x, y); D_t(x, y) = 0, otherwise
where D_t(x, y) is the detection result at pixel (x, y) at time t: 1 means pixel (x, y) is a foreground point and 0 means it is a background point; B_t(x, y) is the disparity value of the background disparity map at pixel (x, y) at time t; μ_t(x, y) is the mean of the Gaussian model of pixel (x, y); σ_t(x, y) is the standard deviation of the Gaussian model of pixel (x, y); if the current frame is the first frame after the initial model is established, then μ_t(x, y) is μ_0(x, y) and σ_t(x, y) is σ_0(x, y).
4. The moving target detection method based on Vibe and disparity map background difference according to claim 3, characterized in that detecting the moving foreground object in step 2.2 comprises:
Moving targets are detected starting from the second frame. Taking the pixel value v(x) of pixel x as the center and R as the radius, a region S_R(v(x)) in a two-dimensional Euclidean color space is created; the number of background sample values of pixel x contained in region S_R(v(x)) is #{S_R(v(x)) ∩ {v_1, v_2, ..., v_N}};
where k is the number of pixel values in the background model that are compared with pixel p, v(p) is the pixel value at the position of pixel p in the current frame, and v_i is a pixel value of the background model of pixel p;
A threshold #_min is set. If #{S_R(v(x)) ∩ {v_1, v_2, ..., v_N}} is greater than or equal to #_min, the pixel is a background pixel in the current frame; if it is less than #_min, the pixel is a foreground pixel.
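The ViBe decision rule above can be sketched as follows for a grayscale pixel, where the Euclidean distance in S_R reduces to an absolute difference; the values R = 20 and #_min = 2 are the commonly cited ViBe defaults, not values stated in this claim:

```python
import numpy as np

def vibe_is_background(v_x, samples, R=20, min_matches=2):
    """ViBe pixel classification sketch: count how many background samples
    {v_1, ..., v_N} fall inside the sphere S_R(v(x)) centered on the current
    pixel value v(x); the pixel is background if the count reaches #_min."""
    samples = np.asarray(samples, dtype=float)
    matches = int(np.sum(np.abs(samples - float(v_x)) < R))  # #{S_R(v(x)) ∩ {v_1..v_N}}
    return matches >= min_matches
```

For color pixels the same rule applies with the per-channel Euclidean norm in place of the absolute difference.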
5. The moving target detection method based on Vibe and disparity map background difference according to claim 4, characterized in that ghost elimination in step 2.2 comprises:
(1) Calculate the optimal segmentation threshold of the current frame;
Assume the current image frame has L gray levels with range [0, L-1]. A segmentation threshold t divides the image into a region A with gray levels [0, t] and a region B with gray levels [t+1, L-1], where A and B represent the foreground and the background respectively;
The between-class variance is:
$$\sigma^2=\omega_0(\mu_0-\mu)^2+\omega_1(\mu_1-\mu)^2=\omega_0\omega_1(\mu_0-\mu_1)^2$$
where ω_0 is the proportion of foreground pixels in the whole image and μ_0 is the average gray value of the foreground pixels; ω_1 is the proportion of background pixels and μ_1 is the average gray value of the background pixels; μ is the average gray value of the entire image;
The gray value at which σ² attains its maximum is the optimal threshold:
$$t^{*}=\arg\max_{0\le t\le L-1}\left[\omega_0\omega_1(\mu_0-\mu_1)^2\right];$$
(2) Perform secondary discrimination on moving-target pixels;
Randomly select M points from the detected background pixels and compute the average gray value f̄ of these M pixels. For a detected foreground pixel f(x), the judgment rule is:
If f̄ ≤ t*: when f(x) > t*, f(x) is again judged as foreground; when f(x) ≤ t*, f(x) is again judged as background;
If f̄ > t*: when f(x) < t*, f(x) is again judged as foreground; when f(x) ≥ t*, f(x) is again judged as background.
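The two ghost-elimination steps above (Otsu's optimal threshold followed by the secondary discrimination) could be sketched as below; this is an illustrative reconstruction, and the direction of the comparison between the background mean and t* is inferred from the two cases of the judgment rule:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: t* = argmax over 0 <= t <= L-1 of w0 * w1 * (mu0 - mu1)^2,
    for an 8-bit grayscale image (L = 256)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 = prob[:t + 1].sum()   # weight of region A, gray levels [0, t]
        w1 = 1.0 - w0             # weight of region B, gray levels [t+1, L-1]
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (levels[:t + 1] * prob[:t + 1]).sum() / w0
        mu1 = (levels[t + 1:] * prob[t + 1:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def secondary_discrimination(f_x, t_star, bg_mean):
    """Re-judge a detected foreground pixel f(x) against t*, using the mean
    gray value of M randomly sampled background pixels (bg_mean)."""
    if bg_mean <= t_star:   # dark background: bright pixels are foreground
        return f_x > t_star
    return f_x < t_star     # bright background: dark pixels are foreground
```

A pixel that the ViBe step flagged as foreground but that falls on the background side of t* is reclassified, which removes the ghost region left by the initial model.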
6. The moving target detection method based on Vibe and disparity map background difference according to claim 1, characterized in that in step S3 the background update technique comprises updating the disparity background model and updating the Vibe background model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711400664.7A CN108038866A (en) | 2017-12-22 | 2017-12-22 | A kind of moving target detecting method based on Vibe and disparity map Background difference |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711400664.7A CN108038866A (en) | 2017-12-22 | 2017-12-22 | A kind of moving target detecting method based on Vibe and disparity map Background difference |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108038866A true CN108038866A (en) | 2018-05-15 |
Family
ID=62100269
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711400664.7A Pending CN108038866A (en) | 2017-12-22 | 2017-12-22 | A kind of moving target detecting method based on Vibe and disparity map Background difference |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108038866A (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101159859A (en) * | 2007-11-29 | 2008-04-09 | 北京中星微电子有限公司 | Motion detection method, device and an intelligent monitoring system |
CN103824070A (en) * | 2014-03-24 | 2014-05-28 | 重庆邮电大学 | Rapid pedestrian detection method based on computer vision |
CN105335934A (en) * | 2014-06-06 | 2016-02-17 | 株式会社理光 | Disparity map calculating method and apparatus |
CN104392468A (en) * | 2014-11-21 | 2015-03-04 | 南京理工大学 | Improved visual background extraction based movement target detection method |
CN105225482A (en) * | 2015-09-02 | 2016-01-06 | 上海大学 | Based on vehicle detecting system and the method for binocular stereo vision |
CN105894534A (en) * | 2016-03-25 | 2016-08-24 | 中国传媒大学 | ViBe-based improved moving target detection method |
CN106203429A (en) * | 2016-07-06 | 2016-12-07 | 西北工业大学 | Based on the shelter target detection method under binocular stereo vision complex background |
Non-Patent Citations (5)
Title |
---|
Yang Lijuan et al.: "A Moving Target Detection Algorithm in Binocular Stereo Vision", Journal of North China Institute of Aerospace Engineering * |
Yang Hailin et al.: "Research on Intrusion Detection in Railway Intelligent Video Surveillance Based on an Improved VIBE Algorithm", Science Technology and Engineering * |
Wang Zhe et al.: "A Moving Target Detection Algorithm Based on Stereo Vision", Journal of Computer Applications * |
Wang Hui: "Research on a Traffic Congestion Discrimination Method Based on Road Surveillance Video", China Master's Theses Full-text Database * |
Wang Jingjing et al.: "Real-time Target Detection Combining Gray-correlation-based Frame Difference and Background Difference", Journal of Central South University (Science and Technology) * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108898057A (en) * | 2018-05-25 | 2018-11-27 | 广州杰赛科技股份有限公司 | Track method, apparatus, computer equipment and the storage medium of target detection |
CN109308709A (en) * | 2018-08-14 | 2019-02-05 | 昆山智易知信息科技有限公司 | Vibe moving object detection algorithm based on image segmentation |
CN109684946A (en) * | 2018-12-10 | 2019-04-26 | 成都睿码科技有限责任公司 | A kind of kitchen mouse detection method based on the modeling of single Gaussian Background |
CN110084129A (en) * | 2019-04-01 | 2019-08-02 | 昆明理工大学 | A kind of river drifting substances real-time detection method based on machine vision |
CN110060278A (en) * | 2019-04-22 | 2019-07-26 | 新疆大学 | The detection method and device of moving target based on background subtraction |
CN110060278B (en) * | 2019-04-22 | 2023-05-12 | 新疆大学 | Method and device for detecting moving target based on background subtraction |
CN110111346A (en) * | 2019-05-14 | 2019-08-09 | 西安电子科技大学 | Remote sensing images semantic segmentation method based on parallax information |
CN110580709A (en) * | 2019-07-29 | 2019-12-17 | 浙江工业大学 | Target detection method based on ViBe and three-frame differential fusion |
CN110599523A (en) * | 2019-09-10 | 2019-12-20 | 江南大学 | ViBe ghost suppression method fused with interframe difference method |
CN111814602A (en) * | 2020-06-23 | 2020-10-23 | 成都信息工程大学 | Intelligent vehicle environment dynamic target detection method based on vision |
CN113139521A (en) * | 2021-05-17 | 2021-07-20 | 中国大唐集团科学技术研究院有限公司中南电力试验研究院 | Pedestrian boundary crossing monitoring method for electric power monitoring |
CN113139521B (en) * | 2021-05-17 | 2022-10-11 | 中国大唐集团科学技术研究院有限公司中南电力试验研究院 | Pedestrian boundary crossing monitoring method for electric power monitoring |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108038866A (en) | A kind of moving target detecting method based on Vibe and disparity map Background difference | |
CN103824070B (en) | A kind of rapid pedestrian detection method based on computer vision | |
CN108038867A (en) | Fire defector and localization method based on multiple features fusion and stereoscopic vision | |
CN104392468B (en) | Based on the moving target detecting method for improving visual background extraction | |
CN106127148B (en) | A kind of escalator passenger's anomaly detection method based on machine vision | |
CN105023010B (en) | A kind of human face in-vivo detection method and system | |
CN103093198B (en) | A kind of crowd density monitoring method and device | |
CN105426827B (en) | Living body verification method, device and system | |
CN108346160A (en) | The multiple mobile object tracking combined based on disparity map Background difference and Meanshift | |
CN102307274B (en) | Motion detection method based on edge detection and frame difference | |
CN108710865A (en) | A kind of driver's anomaly detection method based on neural network | |
CN105046206B (en) | Based on the pedestrian detection method and device for moving prior information in video | |
CN106548488B (en) | A kind of foreground detection method based on background model and inter-frame difference | |
CN103679749A (en) | Moving target tracking based image processing method and device | |
CN109670396A (en) | A kind of interior Falls Among Old People detection method | |
CN105160310A (en) | 3D (three-dimensional) convolutional neural network based human body behavior recognition method | |
CN106373143A (en) | Adaptive method and system | |
CN107833242A (en) | One kind is based on marginal information and improves VIBE moving target detecting methods | |
CN103413120A (en) | Tracking method based on integral and partial recognition of object | |
CN105760846A (en) | Object detection and location method and system based on depth data | |
CN109598684A (en) | In conjunction with the correlation filtering tracking of twin network | |
CN108074234A (en) | A kind of large space flame detecting method based on target following and multiple features fusion | |
CN106296744A (en) | A kind of combining adaptive model and the moving target detecting method of many shading attributes | |
CN106204586A (en) | A kind of based on the moving target detecting method under the complex scene followed the tracks of | |
CN106295657A (en) | A kind of method extracting human height's feature during video data structure |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication ||
Application publication date: 20180515 |