CN108038866A - A kind of moving target detecting method based on Vibe and disparity map Background difference - Google Patents
- Publication number: CN108038866A (application CN201711400664.7A)
- Authority: CN (China)
- Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/215—Motion-based segmentation
- G06T5/80—Geometric correction
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
- G06T2207/20228—Disparity calculation for image-based rendering
Abstract
The invention discloses a moving target detection method based on Vibe and the disparity map background difference method, relating to the field of computer vision. The method first establishes a Gaussian model from a disparity map sequence and performs moving target detection by disparity map background differencing; it then performs moving target detection based on an improved Vibe algorithm; finally, the two detection results are combined with an AND operation to obtain the final moving target region, the background model is updated, and detection of the next frame can proceed. The invention combines a target detection method based on monocular vision with one based on binocular vision: it can extract a complete moving target, mitigates the susceptibility of monocular motion target detection to illumination and shadow, and can eliminate the ghost phenomenon.
Description
Technical Field
The invention relates to the field of computer vision, in particular to a moving target detection method based on a Vibe and disparity map background difference method.
Background
Moving object detection is the basis of object recognition and tracking. Detecting moving targets quickly and accurately benefits subsequent work such as target tracking, recognition and behavior understanding, and has wide application in iris and face recognition, security surveillance, robot navigation, and aircraft and satellite monitoring systems.
Moving object detection algorithms include the optical flow method, the inter-frame difference method and the background difference method. The optical flow method needs special hardware support, and its computation is complex and heavy, so it is used less in practice. The inter-frame difference method is simple in principle and insensitive to noise and light changes, but its detection results tend to contain holes. The background difference method can extract complete target information, but it is easily affected by dynamic changes of the external scene such as illumination. When a moving target changes slowly or moves fast, the background difference method easily detects the exposed background area (that is, the current background still retains the moving-target information of the previous frame although the target has left that area) as foreground, producing the ghost phenomenon; its noise handling in complex scenes such as swaying branches is poor, its adaptability to the environment is weak, and the resulting false detections make subsequent target tracking difficult. The traditional monocular-vision moving object detection method can detect the outline of a moving object, but it is easily influenced by external conditions and may detect shadows and parts of the background as foreground.
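As an illustration of the inter-frame difference method described above, the following sketch marks pixels whose gray value changes between consecutive frames; the threshold of 25 and the array shapes are illustrative assumptions, not values from the patent:

```python
import numpy as np

def frame_difference(prev: np.ndarray, curr: np.ndarray, thresh: int = 25) -> np.ndarray:
    """Inter-frame difference: a pixel whose gray value changes by more than
    `thresh` between consecutive frames is marked as moving foreground (1)."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

# Two toy 4x4 gray frames in which a single pixel brightens
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 2] = 200
mask = frame_difference(prev, curr)  # only pixel (1, 2) is flagged
```

Note that, as the text observes, a uniformly colored object yields changed pixels only at its edges, which is the hole phenomenon.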
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a moving target detection method based on Vibe and the disparity map background difference method. The method is a binocular-vision motion detection method: it can extract a complete moving target, eliminates ghosts during motion detection, and, by adopting a disparity-map-based background difference method, mitigates the susceptibility of monocular-vision motion detection to illumination and shadow.
In order to achieve the purpose, the technical scheme of the invention specifically comprises the following steps:
s1, under a parallel binocular stereo vision system, a left camera and a right camera are adopted to collect images, and motion foreground detection based on a parallax image background difference method is carried out;
s2, establishing a Vibe background model on the last left image among the left images used for the disparity maps in step S1.1, and extracting a moving foreground target by using an improved Vibe algorithm;
and S3, performing AND operation on the results of the steps S1.2 and S2.2 to obtain a moving object detection result, updating the background model, and continuing the moving object detection of a new image frame.
Further, as a preferred embodiment of the present invention, S1 includes:
s1.1, aiming at left and right image sequences acquired by left and right cameras, obtaining a disparity map of a left and right image pair acquired at the same moment, and establishing an initial background model by using the disparity map;
s1.2, collecting a left image and a right image of a next frame, solving a disparity map of the left image and the right image, and detecting a foreground target by using a disparity map background difference method.
Further, as a preferred embodiment of the present invention, S1.1 includes: obtaining, with the census stereo matching method, the disparity map B_i (1 ≤ i ≤ n) of the left image f_{l,i} and right image f_{r,i} acquired at the same moment, giving a background disparity map sequence B_1, B_2, ..., B_n, and establishing a single-Gaussian statistical background model from the sequence; the mean μ_0(x, y) and variance σ_0^2(x, y) of pixel (x, y) in the background disparity maps are respectively:
μ_0(x, y) = (1/n) Σ_{i=1}^{n} B_i(x, y)
σ_0^2(x, y) = (1/n) Σ_{i=1}^{n} [B_i(x, y) − μ_0(x, y)]^2
wherein B_i(x, y) is the disparity value of disparity map B_i at pixel (x, y).
Further, as a preferred embodiment of the present invention, step S1.2 includes: suppose the left and right images acquired at any time t are f_{l,t} and f_{r,t}; obtain the disparity map B_t with the census stereo matching algorithm and detect the foreground target by the disparity map background difference method, with the detection formula:
D_t(x, y) = 1 if |B_t(x, y) − μ_t(x, y)| > 2.5 σ_t(x, y), and D_t(x, y) = 0 otherwise
where D_t(x, y) is the detection result at pixel (x, y) at time t, 1 meaning (x, y) is a foreground point and 0 a background point; B_t(x, y) is the disparity value at pixel (x, y) at time t; μ_t(x, y) and σ_t(x, y) are the mean and standard deviation of the Gaussian model at pixel (x, y). If the current frame is the first frame after the initial model is established, μ_t(x, y) is μ_0(x, y) and σ_t(x, y) is σ_0(x, y).
Further, as a preferred embodiment of the present invention, S2 includes:
s2.1, establishing a Vibe background model on the last left image among the left images used for the disparity maps in step S1.1;
and S2.2, starting from the left image of the next frame, detecting a moving foreground target and eliminating ghosting.
Further, as a preferred technical solution of the present invention, the detecting a moving foreground object includes:
starting from the second frame, detect the moving target: for each pixel x with pixel value v(x), create a region S_R(v(x)) of the two-dimensional Euclidean chromaticity space centered at v(x) with radius R; #{S_R(v(x)) ∩ {v_1, v_2, ..., v_N}} is the number of background sample values of pixel x falling inside S_R(v(x));
wherein R is set adaptively per pixel (the formula is given in the detailed description); in that formula, k is the number of pixel values in the background model compared with pixel p, v(p) is the pixel value at the position of pixel p in the current frame, and v_i is a sample value of the background model of p;
set a threshold #_min: if #{S_R(v(x)) ∩ {v_1, v_2, ..., v_N}} ≥ #_min, the pixel is a background pixel in the current frame; if #{S_R(v(x)) ∩ {v_1, v_2, ..., v_N}} < #_min, the pixel is a foreground pixel.
Further, as a preferred embodiment of the present invention, the removing ghost includes:
(1) calculating the optimal segmentation threshold of the current frame;
assuming that the gray level of the current image frame is L, the gray range is [0, L-1], and the segmentation threshold is t, the image can be divided into an area a with the gray level of [0, t ] and an area B with the gray level of [ t +1, L-1], where A, B represents the foreground and the background, respectively;
the between-class variance is:
σ2=ω0(μ0-μ)2+ω1(μ0-μ1)2=ω0ω1(μ0-μ1)2
wherein ω_0 is the proportion of foreground pixel points in the whole image and μ_0 the average gray value of the foreground pixels; ω_1 is the proportion of background pixel points and μ_1 the average gray value of the background pixels; μ is the average gray value of the whole image;
the gray value at which σ^2 attains its maximum is the optimal threshold:
t* = arg max_{0 ≤ t ≤ L−1} σ^2(t)
(2) carrying out secondary discrimination on the moving target pixel points;
randomly select M of the detected background pixel points and compute the average of their gray values, denoted v̄; let f(x) be a detected foreground pixel; the judgment rule is:
if v̄ ≤ t*: when f(x) > t*, f(x) is re-judged as foreground; when f(x) ≤ t*, f(x) is re-judged as background;
if v̄ > t*: when f(x) < t*, f(x) is re-judged as foreground; when f(x) ≥ t*, f(x) is re-judged as background.
Further, as a preferred embodiment of the present invention, in step S3, updating the background model includes updating the disparity background model and updating the Vibe background model.
Compared with the prior art, the invention has the following beneficial effects:
1) the moving target detection based on the parallax image background difference method is not influenced by illumination change, can extract a complete moving target, and can eliminate the influence of a shadow area on the moving detection.
2) The invention uses the improved Vibe algorithm to extract a more accurate motion region, and combines the pixel-level judgment of the Vibe algorithm with the Otsu algorithm, which operates on the overall characteristics of the image, to eliminate ghosts in the motion detection process.
3) The invention combines the moving target detection based on the Vibe algorithm in the monocular vision and the target detection based on the background difference method of the parallax image in the binocular vision, extracts the complete moving target, effectively avoids the influence of illumination and shadow in the target detection process, and eliminates the ghost phenomenon.
Drawings
Fig. 1 is a flowchart of a moving object detection method in the present embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The specific operation flow of the moving target detection method based on Vibe and the disparity map background difference method of the present invention is shown in fig. 1, and mainly comprises the following three major steps S1-S3, which are described in detail below:
s1, under a parallel binocular stereo vision system, a left camera and a right camera are adopted to collect images, and motion foreground detection based on a parallax image background difference method is carried out;
the traditional monocular-vision motion foreground detection is easily influenced by light changes and takes shadow parts as moving foreground, while a sudden change of light does not affect the acquisition of the disparity map; therefore left and right cameras are adopted to synchronously acquire images and the initial background model is established from disparity maps. S1 specifically includes the following steps:
s1.1, aiming at left and right image sequences acquired by left and right cameras, obtaining a disparity map of a left and right image pair acquired at the same moment, and establishing an initial background model by using the disparity map;
suppose that the left camera acquires the left image sequence as: f. ofl,1,fl,2,...fl,nAnd the right image sequence collected by the right camera corresponding to the left image sequence is as follows: f. ofr,1,fr,2,...fr,nThen, the left image f collected at the same time is obtained by using a census stereo matching methodl,i(1. ltoreq. i. ltoreq.n) and right image fr,i(1. ltoreq. i. ltoreq.n) parallax map Bi(i is more than or equal to 1 and less than or equal to n) to obtain a background parallax map sequence B1,B2,...BnAnd a single-Gaussian statistical background model is established by utilizing the background parallax image sequence, the establishment of the dynamic single-Gaussian statistical background model can better overcome the influence of external environment change on target detection, and the mean value mu of pixel points (x, y) in the background parallax image0(x, y) and varianceRespectively as follows:
wherein, Bi(x, y) is a parallax map BiThe disparity value at pixel (x, y).
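The per-pixel mean and variance above can be computed directly over a stack of n background disparity maps; the following numpy sketch (array shapes chosen for illustration) implements the two formulas:

```python
import numpy as np

def build_disparity_background(disparity_seq: np.ndarray):
    """Single-Gaussian background model from n background disparity maps
    B_1..B_n stacked as an (n, H, W) array: per-pixel mean mu_0(x, y) and
    variance sigma_0^2(x, y), as in formulas (1) and (2)."""
    mu0 = disparity_seq.mean(axis=0)                  # (1/n) * sum B_i
    var0 = ((disparity_seq - mu0) ** 2).mean(axis=0)  # (1/n) * sum (B_i - mu0)^2
    return mu0, var0

# n = 3 disparity maps of size 1 x 2
seq = np.array([[[10.0, 20.0]], [[12.0, 20.0]], [[14.0, 20.0]]])
mu0, var0 = build_disparity_background(seq)
```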
S1.2, collecting a left image and a right image of a next frame, solving a disparity map of the left image and the right image, and detecting a foreground target by using a disparity map background difference method;
suppose the left and right images acquired at any time t are f_{l,t} and f_{r,t}; obtain the disparity map B_t with the census stereo matching algorithm and detect the foreground target by the disparity map background difference method, with the detection formula:
D_t(x, y) = 1 if |B_t(x, y) − μ_t(x, y)| > 2.5 σ_t(x, y), and D_t(x, y) = 0 otherwise    (3)
where D_t(x, y) is the detection result at pixel (x, y) at time t, 1 meaning (x, y) is a foreground point and 0 a background point; B_t(x, y) is the disparity value at pixel (x, y) at time t; μ_t(x, y) and σ_t(x, y) are the mean and standard deviation of the Gaussian model at pixel (x, y). If the current frame is the first frame after the initial model is established, μ_t(x, y) is μ_0(x, y) and σ_t(x, y) is σ_0(x, y).
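The detection rule translates into a single vectorized comparison; the values below are toy numbers for illustration:

```python
import numpy as np

def disparity_foreground(Bt: np.ndarray, mu: np.ndarray, sigma: np.ndarray) -> np.ndarray:
    """Disparity background difference: D_t(x, y) = 1 where
    |B_t - mu_t| > 2.5 * sigma_t, else 0."""
    return (np.abs(Bt - mu) > 2.5 * sigma).astype(np.uint8)

mu = np.array([[12.0, 20.0]])
sigma = np.array([[1.0, 1.0]])
Bt = np.array([[18.0, 21.0]])  # first pixel deviates by 6 sigma, second by 1 sigma
D = disparity_foreground(Bt, mu, sigma)
```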
S2, establishing a Vibe background model on the last left image among the left images used for the disparity maps in step S1.1, and extracting the moving foreground target by using an improved Vibe algorithm;
the Vibe algorithm has the advantages of fast running speed and high target extraction accuracy, so the invention improves the Vibe algorithm and uses it to extract the moving foreground target, mainly comprising the following steps:
s2.1, establishing a Vibe background model on the last left image among the left images used for the disparity maps in step S1.1;
the Vibe algorithm of the invention is initialized on the last left image among those used to establish the Gaussian initial model in step S1.1, and a neighborhood method is introduced to build a corresponding background set for each pixel point. Define the background pixel value at pixel x as v(x); randomly select N pixel values v_1, v_2, ..., v_N in the 8-neighborhood of each pixel x as the background model sample values of x, and denote the background model as M(x); then:
M(x) = {v_1, v_2, ..., v_N}    (4)
the Vibe algorithm initializes the background model with the first frame image, initializing each sample of a pixel's background sample space with a pixel value randomly selected from the pixel and its neighborhood. In the first frame image, y is randomly selected among the sample points in the 8-neighborhood N_G(x) of pixel x; letting v_0(y) be the pixel value of the first frame image at y, the initialized background model is:
M_0(x) = {v_0(y) | y ∈ N_G(x)}    (5)
wherein M_0(x) is the initialized background model.
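A sketch of this neighborhood initialization (formula (5)); the sample count N = 20 is the value commonly used for Vibe and is an assumption here, since the text leaves N unspecified:

```python
import numpy as np

rng = np.random.default_rng(0)

def vibe_init(frame: np.ndarray, N: int = 20) -> np.ndarray:
    """Initialize a Vibe background model from a single frame: each of the N
    samples of a pixel is drawn at random from the pixel's neighborhood
    (offsets in {-1, 0, 1}^2, image edges padded by replication)."""
    H, W = frame.shape
    model = np.empty((N, H, W), dtype=frame.dtype)
    padded = np.pad(frame, 1, mode="edge")
    ys = np.arange(H)[:, None] + 1
    xs = np.arange(W)[None, :] + 1
    for n in range(N):
        dy = rng.integers(-1, 2, size=(H, W))  # random neighbor offset per pixel
        dx = rng.integers(-1, 2, size=(H, W))
        model[n] = padded[ys + dy, xs + dx]
    return model

frame = np.full((5, 5), 100, dtype=np.uint8)
model = vibe_init(frame)  # constant frame -> every sample equals 100
```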
S2.2, starting from the next frame of left image after the background model is established, detecting the moving foreground target and eliminating ghosts;
s2.2.1 classification of background and foreground based on adaptive thresholding Vibe algorithm;
the moving target is detected starting from the next frame of left image after the initial background model is established. A circle S_R(v(x)) of the two-dimensional Euclidean chromaticity space, centered at the pixel value v(x) of pixel x with radius R, is created to compare the pixel value of pixel x in the new frame with the background sample values at that point and classify the pixel. When the Vibe algorithm performs foreground detection it judges whether a sample value in the background model matches the current pixel value, traditionally with a fixed radius threshold R. When R is set large, foreground pixels close to the background value are detected as background, so the moving object is detected incompletely. When R is set small, unwanted dynamically changing parts of the background (such as leaves and branches) are detected, so the detection result contains more noise.
Therefore, in order to improve detection accuracy, the method of the invention sets the threshold R for each pixel according to the pixel's own situation, by formula (6) (the formula itself is not reproduced in the available text):
In formula (6), k is the number of pixel values of the background model compared with pixel p; v(p) is the pixel value at the position of pixel p in the current frame; v_i is a sample value of the background model of p.
In order to prevent an overly large or overly small threshold R from making the detection result inaccurate, the invention sets upper and lower limits R ∈ [20, 40]: when formula (6) yields R < 20, R is set to 20; when it yields R > 40, R is set to 40.
Further, the region S_R(v(x)) defined above contains #{S_R(v(x)) ∩ {v_1, v_2, ..., v_N}} background sample values of pixel x, and the size of #{S_R(v(x)) ∩ {v_1, v_2, ..., v_N}} determines whether the pixel is a foreground or background pixel. The count is initialized to 0, and the threshold #_min for deciding foreground versus background is set to 2. If #{S_R(v(x)) ∩ {v_1, v_2, ..., v_N}} ≥ #_min, the pixel is a background pixel in the current frame; if #{S_R(v(x)) ∩ {v_1, v_2, ..., v_N}} < #_min, the pixel is a foreground pixel.
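A per-pixel sketch of the adaptive-radius classification. Because formula (6) for R is not reproduced in the translated text, the mean absolute distance between the current pixel value and its background samples is used here as an assumed stand-in; the clamp to [20, 40] and the threshold #_min = 2 follow the text:

```python
import numpy as np

def adaptive_radius(v_p: float, samples: np.ndarray) -> float:
    """Assumed stand-in for formula (6): mean absolute distance between the
    current pixel value v(p) and the background samples, clamped to [20, 40]."""
    R = float(np.mean(np.abs(samples.astype(float) - v_p)))
    return float(np.clip(R, 20.0, 40.0))

def classify_pixel(v_p: float, samples: np.ndarray, hash_min: int = 2) -> int:
    """Count samples inside S_R(v(x)); background (0) if the count reaches
    #_min, foreground (1) otherwise."""
    R = adaptive_radius(v_p, samples)
    matches = int(np.sum(np.abs(samples.astype(float) - v_p) < R))
    return 0 if matches >= hash_min else 1

samples = np.array([100, 102, 98, 101] * 5, dtype=np.uint8)  # N = 20 samples
```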
S2.2.2 performing secondary judgment to eliminate ghost by combining the foreground detection result and an Otsu threshold method;
ghosting refers to a foreground region that does not correspond to an actual moving object; it is caused by the sudden movement of an originally stationary object in the background, which makes the background model inconsistent with the actual background. When the object suddenly moves, its original position is replaced by the area it previously covered; this change appears immediately in the next image sequence, while the background model does not reflect it immediately. The background model is therefore invalid for a period of time, which causes false detection at the original position of the object, so that a non-existent moving object is detected, producing the ghost phenomenon. For the ghost problem, the invention adopts a secondary judgment combining the foreground detection result with the Otsu threshold to suppress ghosts, mainly comprising the following steps:
(1) calculating the optimal segmentation threshold of the current frame;
assuming that the gray level count of the current image frame is L, the gray range is [0, L-1], and the segmentation threshold is t, the image can be divided into an area A with gray levels in [0, t] and an area B with gray levels in [t+1, L-1], where A and B represent the foreground area and the background area, respectively.
The between-class variance is:
σ^2 = ω_0(μ_0 − μ)^2 + ω_1(μ_1 − μ)^2 = ω_0 ω_1 (μ_0 − μ_1)^2    (7)
wherein ω_0 is the proportion of foreground pixel points in the whole image and μ_0 the average gray value of the foreground pixels; ω_1 is the proportion of background pixel points and μ_1 the average gray value of the background pixels; μ is the average gray value of the whole image. A larger between-class variance means a larger difference between the two areas, and the image can be better segmented. Therefore, the gray value at which σ^2 attains its maximum is the optimal threshold:
t* = arg max_{0 ≤ t ≤ L−1} σ^2(t)    (8)
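The optimal threshold t* of formula (8) can be found by exhaustive search over the gray histogram; the 2 x 4 test image is an illustrative assumption:

```python
import numpy as np

def otsu_threshold(img: np.ndarray, L: int = 256) -> int:
    """Return the threshold t* that maximizes the between-class variance
    sigma^2(t) = w0 * w1 * (mu0 - mu1)^2 over the gray range [0, L-1]."""
    p = np.bincount(img.ravel(), minlength=L).astype(float)
    p /= p.sum()  # gray-level probabilities
    best_t, best_var = 0, -1.0
    for t in range(L - 1):
        w0 = p[: t + 1].sum()
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue  # one class empty: variance undefined
        mu0 = (np.arange(t + 1) * p[: t + 1]).sum() / w0
        mu1 = (np.arange(t + 1, L) * p[t + 1:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

img = np.array([[10, 10, 10, 200], [10, 10, 200, 200]], dtype=np.uint8)
t_star = otsu_threshold(img)  # any t in [10, 199] separates the two modes
```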
(2) carrying out secondary discrimination on the moving target pixel points.
Randomly select M of the background pixel points detected in step S2.2.1 and compute the average of their gray values, denoted v̄. Let f(x) be a foreground pixel detected in step S2.2.1; the judgment rule is:
if v̄ ≤ t*: when f(x) > t*, f(x) is re-judged as foreground; when f(x) ≤ t*, f(x) is re-judged as background;
if v̄ > t*: when f(x) < t*, f(x) is re-judged as foreground; when f(x) ≥ t*, f(x) is re-judged as background.
Through this secondary judgment of the foreground detected in step S2.2.1, misjudged parts are filtered out and ghost parts are re-judged as background.
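A sketch of this secondary judgment; M, the toy frame and the value of t* are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def suppress_ghosts(frame, fg_mask, t_star, M=100):
    """Sample up to M detected background pixels, compare their mean gray
    value v_bar with t*, then re-judge each detected foreground pixel f(x)
    against t* according to the rule above."""
    bg_vals = frame[fg_mask == 0]
    idx = rng.integers(0, bg_vals.size, size=min(M, bg_vals.size))
    v_bar = bg_vals[idx].mean()
    fg_vals = frame[fg_mask == 1].astype(float)
    if v_bar <= t_star:
        keep = fg_vals > t_star  # dark background: bright pixels stay foreground
    else:
        keep = fg_vals < t_star  # bright background: dark pixels stay foreground
    out = fg_mask.copy()
    out[fg_mask == 1] = keep.astype(fg_mask.dtype)
    return out

frame = np.array([[10, 10, 200, 15]], dtype=np.uint8)
fg_mask = np.array([[0, 0, 1, 1]], dtype=np.uint8)  # gray-15 pixel is a ghost
cleaned = suppress_ghosts(frame, fg_mask, t_star=100)
```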
S3, performing an AND operation on the results of steps S1.2 and S2.2 to obtain the moving object detection result of the image frame; then updating the background model, and continuing the moving object detection of a new image frame.
S3.1, performing AND operation on results of the step S1.2 and the step S2.2;
s3.2, updating the background model, wherein the updating comprises updating of the parallax background model and updating of the Vibe background model;
s3.2.1 updating the parallax background model;
as time goes on, the background inevitably changes; for this situation the invention uses an adaptive background update model to update the background in real time. Specifically, when pixel (x, y) at time t is judged a background point by formula (3) in step S1.2, the parameters of the Gaussian model are updated by the following formula:
μ_{t+1}(x, y) = (1 − α) μ_t(x, y) + α B_t(x, y)    (9)
wherein α is the background update rate, set to 0.03.
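A sketch of this update step. Formula (9) gives the mean update; the variance update below is a standard companion rule and is an assumption, since the translated text does not reproduce it:

```python
import numpy as np

def update_gaussian(mu_t, var_t, B_t, is_bg, alpha=0.03):
    """Update the single-Gaussian disparity background model only at pixels
    judged background. Mean: formula (9); variance: assumed companion rule
    sigma^2_{t+1} = (1 - a) * sigma^2_t + a * (B_t - mu_t)^2."""
    mu_next = np.where(is_bg, (1 - alpha) * mu_t + alpha * B_t, mu_t)
    var_next = np.where(is_bg, (1 - alpha) * var_t + alpha * (B_t - mu_t) ** 2, var_t)
    return mu_next, var_next

mu = np.array([10.0, 10.0])
var = np.array([4.0, 4.0])
B = np.array([20.0, 20.0])
is_bg = np.array([True, False])  # second pixel was judged foreground
mu_next, var_next = update_gaussian(mu, var, B, is_bg)
```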
S3.2.2 updating the Vibe background model;
and (3) updating the background model by adopting a background updating method of a Vibe algorithm aiming at the background pixels detected in the step (S2.2).
S3.3, after the background model update, detection of moving objects in subsequent new image frames continues using the methods of steps S1.2, S2.2 and S3.
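Putting the combination of step S3.1 in code form, the AND operation is a single elementwise operation (toy masks for illustration):

```python
import numpy as np

def combine_detections(mask_disparity: np.ndarray, mask_vibe: np.ndarray) -> np.ndarray:
    """Step S3.1: a pixel is moving foreground only if both the disparity
    background difference and the improved Vibe algorithm flag it."""
    return (mask_disparity.astype(bool) & mask_vibe.astype(bool)).astype(np.uint8)

a = np.array([[1, 1, 0, 0]], dtype=np.uint8)  # disparity-difference result
b = np.array([[1, 0, 1, 0]], dtype=np.uint8)  # improved-Vibe result
final = combine_detections(a, b)
```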
The method combines a target detection method based on monocular vision with one based on binocular vision, overcomes the problem that traditional monocular-vision target detection is easily influenced by illumination and shadow, and eliminates the ghost phenomenon in the target detection process. The moving target detection method based on Vibe and the disparity map background difference method can in practice be embedded in an FPGA (field programmable gate array) and applied in cameras with moving target tracking.
It will be clear to a person skilled in the art that the scope of the present invention is not limited to the examples discussed in the foregoing, but that several amendments and modifications thereof are possible without deviating from the scope of the present invention as defined in the attached claims. While the invention has been illustrated and described in detail in the drawings and the description, such illustration and description are to be considered illustrative or exemplary and not restrictive. The invention is not limited to the disclosed embodiments.
Claims (6)
1. A moving target detection method based on a Vibe and a disparity map background difference method is characterized by comprising the following steps:
s1, under a parallel binocular stereo vision system, acquiring images by a left camera and a right camera, and detecting a moving foreground based on a parallax image background difference method;
s1.1, aiming at left and right image sequences acquired by left and right cameras, obtaining a disparity map of a left and right image pair acquired at the same moment, and establishing an initial background model by using the disparity map;
s1.2, collecting a left image and a right image of a next frame, solving a disparity map of the left image and the right image, and detecting a foreground target by using a disparity map background difference method;
s2, establishing a Vibe background model on the last left image among the left images used for the disparity maps in step S1.1, and extracting a moving foreground target by using an improved Vibe algorithm;
s2.1, establishing a Vibe background model on the last left image among the left images used for the disparity maps in step S1.1;
s2.2, starting from the left image of the next frame, detecting a moving foreground target and eliminating ghosting;
and S3, performing AND operation on the results of the steps S1.2 and S2.2 to obtain a moving object detection result, updating the background model, and continuing the moving object detection of a new image frame.
2. The method for detecting the moving object based on the Vibe and the disparity map background difference method as claimed in claim 1, wherein the S1.1 comprises: obtaining, with the census stereo matching method, the disparity map B_i (1 ≤ i ≤ n) of the left image f_{l,i} and right image f_{r,i} acquired at the same moment, giving a background disparity map sequence B_1, B_2, ..., B_n, and establishing a single-Gaussian statistical background model from the sequence; the mean μ_0(x, y) and variance σ_0^2(x, y) of pixel (x, y) in the background disparity maps are respectively:
μ_0(x, y) = (1/n) Σ_{i=1}^{n} B_i(x, y)
σ_0^2(x, y) = (1/n) Σ_{i=1}^{n} [B_i(x, y) − μ_0(x, y)]^2
wherein B_i(x, y) is the disparity value of disparity map B_i at pixel (x, y).
3. The method for detecting a moving object based on the Vibe and the disparity map background difference method as claimed in claim 2, wherein the step S1.2 comprises: suppose that the left and right images collected at any time t are respectively fl,tAnd fr,tObtaining a disparity map B by using a census stereo matching algorithmtAnd detecting the foreground target by using a disparity map background difference method, wherein the detection formula is as follows:
<mrow> <msub> <mi>D</mi> <mi>t</mi> </msub> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>=</mo> <mfenced open = "{" close = ""> <mtable> <mtr> <mtd> <mrow> <mn>1</mn> <mo>,</mo> </mrow> </mtd> <mtd> <mrow> <mo>|</mo> <msub> <mi>B</mi> <mi>t</mi> </msub> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>-</mo> <msub> <mi>&mu;</mi> <mi>t</mi> </msub> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>|</mo> <mo>></mo> <mn>2.5</mn> <msub> <mi>&sigma;</mi> <mi>t</mi> </msub> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <mn>0</mn> <mo>,</mo> </mrow> </mtd> <mtd> <mrow> <mi>e</mi> <mi>l</mi> <mi>s</mi> <mi>e</mi> </mrow> </mtd> </mtr> </mtable> </mfenced> </mrow>
in the above formula, D_t(x, y) is the detection result of pixel (x, y) at time t, where 1 indicates that (x, y) is a foreground point and 0 that it is a background point; B_t(x, y) is the disparity value of the disparity map at pixel (x, y) at time t; μ_t(x, y) is the mean of the Gaussian model of pixel (x, y); σ_t(x, y) is the standard deviation of the Gaussian model of pixel (x, y); if the current frame is the first frame after the initial model is established, then μ_t(x, y) is μ0(x, y) and σ_t(x, y) is σ0(x, y).
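The 2.5σ detection test above vectorizes directly (an illustrative sketch; the function name and the exposed `k` parameter, fixed at 2.5 in the claim, are assumptions):

```python
import numpy as np

def disparity_background_difference(B_t, mu_t, sigma_t, k=2.5):
    """Mark pixel (x, y) as foreground (1) when
    |B_t(x, y) - mu_t(x, y)| > k * sigma_t(x, y), else background (0)."""
    return (np.abs(B_t - mu_t) > k * sigma_t).astype(np.uint8)
```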
4. The method for detecting the moving object based on Vibe and the disparity map background difference method as claimed in claim 3, wherein detecting the moving foreground object in step S2.2 comprises:
detecting the moving object starting from the second frame: for each pixel x to be detected, create a two-dimensional Euclidean chromaticity space region S_R(v(x)) centered on its pixel value v(x) with radius R; the number of background sample values of pixel x falling in this region is #{S_R(v(x)) ∩ {v1, v2, ..., vN}};
wherein k is the number of background-model sample values compared with the pixel p, v(p) is the pixel value at the position of p in the current frame, and v_i is a sample value of the background model of p; by the definition of S_R, the count #{S_R(v(x)) ∩ {v1, v2, ..., vN}} is the number of samples v_i satisfying |v(p) − v_i| < R;
setting a threshold #min: if #{S_R(v(x)) ∩ {v1, v2, ..., vN}} is greater than or equal to the threshold #min, the pixel is a background pixel in the current frame; if #{S_R(v(x)) ∩ {v1, v2, ..., vN}} is less than the threshold #min, the pixel is a foreground pixel.
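The ViBe decision rule above can be sketched for a single pixel as follows (illustrative only; the defaults R=20 and #min=2 are the values commonly used in ViBe, not stated in the claim):

```python
import numpy as np

def vibe_classify(v_x, samples, R=20, n_min=2):
    """ViBe pixel decision: count background samples within radius R of the
    current pixel value v(x); background iff the count reaches #min."""
    count = int(np.sum(np.abs(np.asarray(samples, dtype=float) - v_x) < R))
    return 'background' if count >= n_min else 'foreground'
```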
5. The method for detecting a moving object based on Vibe and the disparity map background difference method as claimed in claim 4, wherein removing ghosting in step S2.2 comprises:
(1) calculating the optimal segmentation threshold of the current frame;
assuming that the current image frame has L gray levels with gray range [0, L-1] and the segmentation threshold is t, the image can be divided into a region A with gray levels [0, t] and a region B with gray levels [t+1, L-1], where A and B represent the foreground and the background, respectively;
the between-class variance is:
σ² = ω0(μ0 − μ)² + ω1(μ1 − μ)² = ω0·ω1·(μ0 − μ1)²
wherein ω0 is the ratio of the number of foreground pixels to that of the whole image and μ0 is the average gray value of the foreground pixels; ω1 is the ratio of the number of background pixels to that of the whole image and μ1 is the average gray value of the background pixels; μ is the average gray value of the whole image;
the gray value at which σ² attains its maximum is the optimal threshold:
t* = arg max_{0 ≤ t ≤ L−1} [ω0·ω1·(μ0 − μ1)²];
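The optimal-threshold search above is Otsu's method; a minimal histogram-based sketch (function name and L=256 default are illustrative):

```python
import numpy as np

def otsu_threshold(gray, L=256):
    """Otsu's method: choose t maximizing the between-class variance
    omega0 * omega1 * (mu0 - mu1)^2 over the gray-level histogram."""
    hist = np.bincount(gray.ravel(), minlength=L).astype(np.float64)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(L - 1):
        w0, w1 = p[:t + 1].sum(), p[t + 1:].sum()
        if w0 == 0 or w1 == 0:
            continue  # one class empty: between-class variance undefined
        mu0 = (np.arange(t + 1) * p[:t + 1]).sum() / w0
        mu1 = (np.arange(t + 1, L) * p[t + 1:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```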
(2) carrying out secondary discrimination on the moving target pixel points;
randomly selecting M pixels from the detected background pixels and computing the average of their gray values, denoted f̄; assuming f(x) is a detected foreground pixel, the determination rule is:
if f̄ ≤ t*: when f(x) > t*, f(x) is re-judged as foreground; when f(x) ≤ t*, f(x) is re-judged as background;
if f̄ > t*: when f(x) < t*, f(x) is re-judged as foreground; when f(x) ≥ t*, f(x) is re-judged as background.
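The secondary discrimination can be sketched per pixel as follows (an illustrative reconstruction: the extracted text dropped the condition on the background average f̄, so the branch condition here is inferred from the stated consequents; names are assumptions):

```python
def rejudge_foreground(f_x, bg_mean, t_star):
    """Secondary discrimination of a detected foreground pixel f(x) against
    the Otsu threshold t*: a pixel on the same side of t* as the sampled
    background mean is re-judged as background (a ghost), otherwise as
    genuine foreground."""
    if bg_mean <= t_star:
        return 'foreground' if f_x > t_star else 'background'
    return 'foreground' if f_x < t_star else 'background'
```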
6. The method for detecting a moving object based on Vibe and disparity map background difference as claimed in claim 1, wherein in step S3 the background updating comprises updating the disparity background model and updating the Vibe background model.
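The claim does not spell out the update rule; the standard conservative ViBe update (assumed here, with the usual subsampling factor φ=16) can be sketched as:

```python
import random

def vibe_update(samples, v_x, phi=16):
    """Conservative ViBe-style update: when a pixel is classified as
    background, with probability 1/phi replace one randomly chosen sample
    of its background model with the current pixel value."""
    if random.randrange(phi) == 0:
        samples[random.randrange(len(samples))] = v_x
    return samples
```

In full ViBe, a random neighbor's model is also updated with the same probability, which lets the background model absorb uncovered areas.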
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711400664.7A CN108038866A (en) | 2017-12-22 | 2017-12-22 | A kind of moving target detecting method based on Vibe and disparity map Background difference |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108038866A true CN108038866A (en) | 2018-05-15 |
Family
ID=62100269
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711400664.7A Pending CN108038866A (en) | 2017-12-22 | 2017-12-22 | A kind of moving target detecting method based on Vibe and disparity map Background difference |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108038866A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101159859A (en) * | 2007-11-29 | 2008-04-09 | 北京中星微电子有限公司 | Motion detection method, device and an intelligent monitoring system |
CN103824070A (en) * | 2014-03-24 | 2014-05-28 | 重庆邮电大学 | A Fast Pedestrian Detection Method Based on Computer Vision |
CN104392468A (en) * | 2014-11-21 | 2015-03-04 | 南京理工大学 | Improved visual background extraction based movement target detection method |
CN105225482A (en) * | 2015-09-02 | 2016-01-06 | 上海大学 | Based on vehicle detecting system and the method for binocular stereo vision |
CN105335934A (en) * | 2014-06-06 | 2016-02-17 | 株式会社理光 | Disparity map calculating method and apparatus |
CN105894534A (en) * | 2016-03-25 | 2016-08-24 | 中国传媒大学 | ViBe-based improved moving target detection method |
CN106203429A (en) * | 2016-07-06 | 2016-12-07 | 西北工业大学 | Based on the shelter target detection method under binocular stereo vision complex background |
Non-Patent Citations (5)
Title |
---|
Yang Lijuan et al., "A Moving Object Detection Algorithm in Binocular Stereo Vision", Journal of North China Institute of Aerospace Engineering * |
Yang Hailin et al., "Research on Intrusion Detection in Railway Intelligent Video Surveillance Based on an Improved VIBE Algorithm", Science Technology and Engineering * |
Wang Zhe et al., "A Moving Object Detection Algorithm Based on Stereo Vision", Journal of Computer Applications * |
Wang Hui, "Research on Traffic Congestion Discrimination Methods Based on Road Surveillance Video", China Masters' Theses Full-text Database * |
Wang Jingjing et al., "Real-time Object Detection Fusing Gray-correlation-based Inter-frame Difference and Background Difference", Journal of Central South University (Science and Technology) * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108898057A (en) * | 2018-05-25 | 2018-11-27 | 广州杰赛科技股份有限公司 | Track method, apparatus, computer equipment and the storage medium of target detection |
CN109308709A (en) * | 2018-08-14 | 2019-02-05 | 昆山智易知信息科技有限公司 | Vibe moving object detection algorithm based on image segmentation |
CN109684946A (en) * | 2018-12-10 | 2019-04-26 | 成都睿码科技有限责任公司 | A kind of kitchen mouse detection method based on the modeling of single Gaussian Background |
CN110084129A (en) * | 2019-04-01 | 2019-08-02 | 昆明理工大学 | A kind of river drifting substances real-time detection method based on machine vision |
CN110060278A (en) * | 2019-04-22 | 2019-07-26 | 新疆大学 | The detection method and device of moving target based on background subtraction |
CN110060278B (en) * | 2019-04-22 | 2023-05-12 | 新疆大学 | Method and device for detecting moving target based on background subtraction |
CN110111346A (en) * | 2019-05-14 | 2019-08-09 | 西安电子科技大学 | Remote sensing images semantic segmentation method based on parallax information |
CN110580709A (en) * | 2019-07-29 | 2019-12-17 | 浙江工业大学 | Target detection method based on ViBe and three-frame differential fusion |
CN110599523A (en) * | 2019-09-10 | 2019-12-20 | 江南大学 | ViBe ghost suppression method fused with interframe difference method |
CN111814602A (en) * | 2020-06-23 | 2020-10-23 | 成都信息工程大学 | Intelligent vehicle environment dynamic target detection method based on vision |
CN113139521A (en) * | 2021-05-17 | 2021-07-20 | 中国大唐集团科学技术研究院有限公司中南电力试验研究院 | Pedestrian boundary crossing monitoring method for electric power monitoring |
CN113139521B (en) * | 2021-05-17 | 2022-10-11 | 中国大唐集团科学技术研究院有限公司中南电力试验研究院 | Pedestrian boundary crossing monitoring method for electric power monitoring |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication ||
Application publication date: 20180515 |