CN107169995B - Self-adaptive moving target visual detection method - Google Patents
- Publication number: CN107169995B
- Application number: CN201710313268.4A
- Authority: CN (China)
- Prior art keywords: algorithm, foreground, image, moving target, detection
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Abstract
The invention discloses an adaptive moving-target visual detection method. It adopts a parallel detection scheme in which multiple moving-target detection algorithms are applied to the input image sequence and a true foreground image is estimated from their detection results. By analyzing and comparing the similarity between the estimated foreground image and each detection result, the weight parameters used in the foreground estimation adapt over time, achieving robust detection of moving targets in complex environments.
Description
Technical Field
The invention belongs to the technical fields of computer vision, video analysis, and artificial intelligence, and particularly relates to an adaptive moving-target visual detection method.
Background
Traditional visual detection methods for moving targets include optical flow, frame differencing, and background differencing. Optical flow offers high detection accuracy, but its computation is complex and its anti-interference capability is poor, so it cannot meet real-time requirements without dedicated hardware support. Frame differencing is the simplest and most efficient of the three; it adapts well to dynamic changes in the environment, but it struggles to recover a complete moving target and is prone to producing holes in the detected region, so its detection results are often unsatisfactory.
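As an illustrative sketch (not part of the patent text), the frame-difference method described above can be written in a few lines; the threshold value is an assumed example:

```python
import numpy as np

def frame_difference(prev_gray, curr_gray, thresh=25):
    # Absolute difference of consecutive grayscale frames, thresholded to a
    # binary motion mask. `thresh` is an assumed example value, not a value
    # taken from the patent.
    diff = np.abs(prev_gray.astype(np.int16) - curr_gray.astype(np.int16))
    return (diff > thresh).astype(np.uint8)
```

Only pixels whose intensity changed between frames are marked, which is why a slowly moving, uniformly colored target leaves holes in the mask.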
Background differencing can both adapt to dynamic environments and detect the complete target shape, and it is widely applied in the field of moving-target detection. Its key requirement is training an accurate background model, so different researchers have proposed different training methods, yielding different background-difference algorithms. Typical examples include the Gaussian mixture model algorithm, the KDE algorithm, the codebook algorithm, the ViBe algorithm, the SuBSENSE algorithm, and the AdaDGS algorithm.
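A minimal background-difference sketch (again not from the patent) uses a running-average background model; the GMM, KDE, codebook, and ViBe variants named above replace this simple model with richer per-pixel statistics, but the subtraction step is analogous. The learning rate and threshold below are illustrative assumptions:

```python
import numpy as np

class RunningAverageBackground:
    # Simplest background-difference scheme: keep a per-pixel running average
    # of past frames as the background model, mark pixels far from it as
    # foreground, then update the model toward the current frame.
    def __init__(self, first_frame, alpha=0.05, thresh=30):
        self.bg = first_frame.astype(np.float64)
        self.alpha = alpha      # background learning rate (illustrative)
        self.thresh = thresh    # foreground decision threshold (illustrative)

    def apply(self, frame):
        fg = np.abs(frame.astype(np.float64) - self.bg) > self.thresh
        # Blend the current frame into the background model.
        self.bg = (1 - self.alpha) * self.bg + self.alpha * frame
        return fg.astype(np.uint8)
```

Because the whole background model is compared against the frame, a stationary camera recovers the full target silhouette rather than only its moving edges.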
Although each of the above algorithms adapts well to certain scenes and performs well on specific data sets, the diversity of moving targets and external environments makes it difficult for any single algorithm to guarantee good detection performance across all of these conditions. Other moving-target detection algorithms continue to be proposed, but none can simultaneously adapt to different environments and achieve good detection results.
Disclosure of Invention
In order to solve the above technical problems, the invention provides an adaptive moving-target detection algorithm that achieves robust detection of moving targets in different environments.
The technical scheme adopted by the invention is as follows: an adaptive moving-target visual detection method, characterized by comprising the following steps:
Step 1: inputting a first frame image;
Step 2: initializing the weight ω_i(x) of each of N different moving-target detection algorithms at every pixel point x, where i ∈ [1, N], x ∈ [1, M], and M is the number of image pixel points;
Step 3: applying the N different moving-target detection algorithms to the input image sequence to obtain the foreground image observations {f_1, f_2, …, f_N};
Step 4: estimating the true foreground image f̂ according to ω_i(x) and the foreground image observations {f_1, f_2, …, f_N};
Step 5: comparing the foreground estimate f̂ with {f_1, f_2, …, f_N}, and updating each algorithm's weight ω_i(x) at every pixel point;
Step 6: if the current image is the last frame, ending the algorithm; otherwise, inputting the next frame of image and repeating steps 3–5.
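The loop of steps 1–6 can be sketched as follows. This is a simplified single-threshold reading (the detailed description refines step 4 into a two-stage estimate); T1 = 0.4 follows the embodiment, while the learning rate `alpha` is an assumed example value:

```python
import numpy as np

def adaptive_fusion(frames_of_masks, T1=0.4, alpha=0.05):
    # frames_of_masks: list of frames, each a list of N binary (H, W) masks,
    # one per detection algorithm.
    n = len(frames_of_masks[0])
    # Step 2: uniform initial weights, one weight per algorithm per pixel.
    w = np.full((n,) + frames_of_masks[0][0].shape, 1.0 / n)
    estimates = []
    for masks in frames_of_masks:
        f = np.stack(masks).astype(np.float64)   # step 3: f_1..f_N, (N, H, W)
        # Step 4 (single-stage simplification): weighted vote against T1.
        est = ((w * f).sum(axis=0) >= T1).astype(np.uint8)
        # Step 5: matching factor M_i = 1 where f_i agrees with the estimate.
        m = (f == est).astype(np.float64)
        w = (1 - alpha) * w + alpha * m          # exponential weight update
        estimates.append(est)                    # step 6: next frame
    return estimates
```

Algorithms that keep agreeing with the fused estimate gain weight per pixel, so a detector that fails only in one image region is down-weighted only there.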
The beneficial effects of the invention are as follows: the invention uses multiple algorithms to detect the same image sequence simultaneously and effectively fuses the detection results of the algorithm modules through weighted combination to estimate the true foreground image. By scoring the overall performance of each algorithm module and adapting the weight parameters accordingly, robust detection of moving targets in complex environments is achieved. In practical applications, compared with other moving-target detection algorithms, the proposed adaptive algorithm obtains more complete moving targets with less noise interference.
Drawings
FIG. 1 is an overall framework diagram of an adaptive moving object detection algorithm according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of physical meanings of parameters in the index FM calculation process according to the embodiment of the invention;
FIG. 3 is a source image captured in an embodiment of the present invention;
FIG. 4 is a diagram illustrating the actual marking of a moving object in a source image according to an embodiment of the present invention;
FIG. 5 is a detection result image of the LBAdaSOM algorithm in the embodiment of the present invention;
FIG. 6 is an image of the detection result of the LOBSTER algorithm in an embodiment of the present invention;
FIG. 7 is an image of the detection results of the SuBSENSE algorithm in an embodiment of the present invention;
FIG. 8 is a detection result image of the DPZivGMM algorithm in an embodiment of the present invention;
FIG. 9 is an image of the result of the detection of the ViBe algorithm in an embodiment of the present invention;
FIG. 10 is a detection result image of the algorithm of the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, the adaptive moving-target visual detection method provided by the present invention includes the following steps:
Step 1: inputting a first frame image;
Step 2: initializing the weight ω_i(x) of each of N different moving-target detection algorithms at every pixel point x, where i ∈ [1, N], x ∈ [1, M], and M is the number of image pixel points; the initial value is:
ω_i(x) = 1/N;
In this embodiment, multithreaded and distributed parallel processing is adopted, and the N different moving-target detection algorithms include the DPZivGMM algorithm, the LBAdaSOM algorithm, the codebook algorithm, the PAWCS algorithm, the FuzzyCho algorithm, the FuzzySug algorithm, the MultiCue algorithm, the LOBSTER algorithm, the ViBe algorithm, and the SuBSENSE algorithm.
And step 3: aiming at N different moving target detection algorithms, detecting an input image sequence to obtain different foreground image observation values { f1,f2,…,fN};
And 4, step 4: according to omegai(x) And foreground image observation value { f1,f2,…,fNEstimate true foreground image estimated value
Wherein f isi(x) Represents the observed value, f, of the ith algorithm at the pixel point xi(x)∈{0,1},T1For the first time
A predefined threshold for foreground estimation, and T1∈[0,1];
Step 4.2: comparing the N foreground image observations {f_1, f_2, …, f_N} with the first foreground estimate f̂_1 to compute the overall score FM_i of each algorithm:
FM_i = 2·Pr_i·Re_i / (Pr_i + Re_i);
Please refer to fig. 2, where Pr denotes precision, Re denotes recall, TP (true positive) denotes the number of pixels correctly detected as foreground, FN (false negative) denotes the number of pixels erroneously detected as background, FP (false positive) denotes the number of pixels erroneously detected as foreground, and TN (true negative) denotes the number of pixels correctly detected as background; thus Pr = TP/(TP + FP) and Re = TP/(TP + FN).
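The FM score above is the harmonic mean of precision and recall computed from the TP/FP/FN counts of fig. 2; a minimal sketch:

```python
import numpy as np

def f_measure(pred, truth):
    # FM score of a binary foreground mask `pred` against a reference mask
    # `truth`, from the TP/FP/FN counts described for FIG. 2.
    tp = int(((pred == 1) & (truth == 1)).sum())  # correctly detected foreground
    fp = int(((pred == 1) & (truth == 0)).sum())  # falsely detected foreground
    fn = int(((pred == 0) & (truth == 1)).sum())  # missed foreground
    if tp == 0:
        return 0.0
    pr = tp / (tp + fp)                 # precision
    re = tp / (tp + fn)                 # recall
    return 2 * pr * re / (pr + re)      # harmonic mean of Pr and Re
```

Here `truth` plays the role of the first foreground estimate f̂_1, so FM_i measures how well each algorithm agrees with the fused consensus rather than with a manually labeled ground truth.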
Step 4.3: combining the overall score FM_i with ω_i(x) to compute each algorithm's per-pixel weight γ_i(x);
Step 4.4: computing the second foreground estimate f̂ according to γ_i(x), whose foreground pixel points x satisfy:
Σ_{i=1}^{N} γ_i(x)·f_i(x) ≥ T_2;
where T_2 ∈ [0, 1] is a predefined threshold for the second foreground estimation;
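The second-stage estimate can be sketched as below. The normalized product used for γ_i(x) is an assumed form, one natural reading of step 4.3 (the patent's exact γ formula appears only as an image in the source text):

```python
import numpy as np

def second_estimate(f, w, fm, T2=0.4):
    # f:  (N, H, W) binary observations f_1..f_N
    # w:  (N, H, W) per-pixel algorithm weights omega_i(x)
    # fm: (N,) frame-level scores FM_i
    # Assumed form: gamma_i(x) proportional to w_i(x) * FM_i, normalized so
    # the gammas sum to 1 at each pixel.
    g = w * fm[:, None, None]
    g = g / g.sum(axis=0, keepdims=True)
    # Foreground where the gamma-weighted vote reaches the threshold T2.
    return ((g * f).sum(axis=0) >= T2).astype(np.uint8)
```

Scaling the pixel weights by FM_i lets an algorithm that scored well on the whole frame pull the vote harder at every pixel, while the per-pixel ω_i(x) still encodes local reliability.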
Step 5: comparing the foreground estimate f̂ with {f_1, f_2, …, f_N}, and updating each algorithm's weight ω_i(x) at every pixel point;
1) the update formula of ω_i(x) is:
ω_{i,t}(x) = (1 − α)·ω_{i,t−1}(x) + α·M_{i,t};
where ω_{i,t}(x) is the weight of the i-th algorithm module at point x at time t, α ∈ [0, 1] is the learning rate of the model, and M_{i,t} is a matching factor: if f_i(x) matches f̂(x), then M_{i,t} = 1; otherwise, M_{i,t} = 0.
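The weight update of step 5 is a per-pixel exponential moving average of the matching factor; a short sketch, with an assumed example learning rate:

```python
import numpy as np

def update_weights(w_prev, f, est, alpha=0.05):
    # w_prev: (N, H, W) previous weights omega_{i,t-1}(x)
    # f:      (N, H, W) binary observations f_i
    # est:    (H, W) fused foreground estimate
    # alpha is an illustrative learning rate, not a value from the patent.
    m = (f == est).astype(np.float64)   # matching factor M_{i,t} per pixel
    return (1 - alpha) * w_prev + alpha * m
```

With this update, an algorithm whose observation keeps matching the fused estimate has its weight decay toward 1 at that pixel, and toward 0 where it keeps disagreeing.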
Step 6: if the current image is the last frame, ending the algorithm; otherwise, inputting the next frame of image and repeating steps 3–5.
This embodiment is implemented on the Microsoft Visual Studio 2013 platform and developed using the OpenCV computer vision library. Ten algorithms are used in the adaptation, namely the DPZivGMM, LBAdaSOM, codebook, PAWCS, FuzzyCho, FuzzySug, MultiCue, LOBSTER, ViBe, and SuBSENSE algorithms, and the thresholds T_1 and T_2 in the adaptation process are both set to 0.4.
Fig. 3 is a source image acquired in the embodiment of the present invention; fig. 4 shows the real marking of the moving target in the source image; figs. 5 through 9 are the detection result images of the LBAdaSOM, LOBSTER, SuBSENSE, DPZivGMM, and ViBe algorithms, respectively; and fig. 10 is the detection result image of the algorithm of the embodiment of the present invention.
The LBAdaSOM and DPZivGMM algorithms produce severe noise when detecting image sequences 1 and 3. LOBSTER loses moving targets when detecting sequences 2 and 3, with severe missed detections. The SuBSENSE algorithm also loses moving targets when detecting sequence 3, and its false detections are severe on sequence 1. The ViBe algorithm still suffers some missed detections on sequences 1 and 2. The algorithm of this embodiment achieves ideal detection results on all three image sequences, obtaining complete moving targets while effectively suppressing noise interference.
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (4)
1. An adaptive moving-target visual detection method, characterized by comprising the following steps:
Step 1: inputting a first frame image;
Step 2: initializing the weight ω_i(x) of each of N different moving-target detection algorithms at every pixel point x, where i ∈ [1, N], x ∈ [1, M], and M is the number of image pixel points;
Step 3: applying the N different moving-target detection algorithms to the input image sequence to obtain the foreground image observations {f_1, f_2, …, f_N};
Step 4: estimating the true foreground image f̂ according to ω_i(x) and the foreground image observations {f_1, f_2, …, f_N};
the specific implementation of step 4 comprises the following substeps:
Step 4.1: computing the first foreground estimate f̂_1, whose foreground pixel points x satisfy Σ_{i=1}^{N} ω_i(x)·f_i(x) ≥ T_1, where f_i(x) ∈ {0, 1} is the observation of the i-th algorithm at pixel point x, and T_1 ∈ [0, 1] is a predefined threshold for the first foreground estimation;
Step 4.2: comparing the N foreground image observations {f_1, f_2, …, f_N} with the first foreground estimate f̂_1 to compute the overall score FM_i of each algorithm, FM_i = 2·Pr_i·Re_i / (Pr_i + Re_i), where Pr = TP/(TP + FP) denotes precision, Re = TP/(TP + FN) denotes recall, TP is the number of pixels correctly detected as foreground, FN is the number of pixels erroneously detected as background, and FP is the number of pixels erroneously detected as foreground;
Step 4.3: combining the overall score FM_i with ω_i(x) to compute each algorithm's per-pixel weight γ_i(x);
Step 4.4: computing the second foreground estimate f̂ according to γ_i(x), whose foreground pixel points x satisfy Σ_{i=1}^{N} γ_i(x)·f_i(x) ≥ T_2, where T_2 ∈ [0, 1] is a predefined threshold for the second foreground estimation;
Step 5: comparing the foreground estimate f̂ with {f_1, f_2, …, f_N}, and updating each algorithm's weight ω_i(x) at every pixel point;
Step 6: if the current image is the last frame, ending the algorithm; otherwise, inputting the next frame of image and repeating steps 3–5.
2. The adaptive moving-target visual detection method of claim 1, wherein the N different moving-target detection algorithms in step 2 comprise the DPZivGMM algorithm, the LBAdaSOM algorithm, the codebook algorithm, the PAWCS algorithm, the FuzzyCho algorithm, the FuzzySug algorithm, the MultiCue algorithm, the LOBSTER algorithm, the ViBe algorithm, and the SuBSENSE algorithm.
3. The adaptive moving-target visual detection method of claim 1, wherein in step 2, ω_i(x) = 1/N.
4. The adaptive moving-target visual detection method of claim 1, wherein in step 5 the weight ω_i(x) is updated as:
ω_{i,t}(x) = (1 − α)·ω_{i,t−1}(x) + α·M_{i,t};
where ω_{i,t}(x) is the weight of the i-th algorithm at point x at time t, α ∈ [0, 1] is the learning rate, and M_{i,t} = 1 if f_i(x) matches f̂(x), otherwise M_{i,t} = 0.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201710313268.4A | 2017-05-05 | 2017-05-05 | Self-adaptive moving target visual detection method
Publications (2)
Publication Number | Publication Date
---|---
CN107169995A | 2017-09-15
CN107169995B | 2020-03-10
Family ID: 59812781
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN101673404A * | 2009-10-19 | 2010-03-17 | Beijing Vimicro Corporation | Target detection method and device
US9454819B1 * | 2015-06-03 | 2016-09-27 | The United States of America as represented by the Secretary of the Air Force | System and method for static and moving object detection

Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
US9191643B2 * | 2013-04-15 | 2015-11-17 | Microsoft Technology Licensing, LLC | Mixing infrared and color component data point clouds
Non-Patent Citations (2)
Title |
---|
Massimo Camplani et al., "Multi-sensor background subtraction by fusing multiple region-based probabilistic classifiers," Pattern Recognition Letters, 2014, pp. 24-28 *
Wang Yang, "Research on infrared small target detection technology based on DSP," Wanfang Data Knowledge Service Platform, 2013-06-27, pp. 27-35 *
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |