CN115564805A - Moving target detection method - Google Patents

Moving target detection method

Info

Publication number
CN115564805A
CN115564805A (application CN202211236531.1A)
Authority
CN
China
Prior art keywords
image
follows
background model
value
cur
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211236531.1A
Other languages
Chinese (zh)
Inventor
曾钦勇
刘胜杰
尹小杰
李双龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Haofu Technology Co ltd
Original Assignee
Chengdu Haofu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Haofu Technology Co ltd
Priority to CN202211236531.1A
Publication of CN115564805A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/269 Analysis of motion using gradient-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20032 Median filtering

Abstract

The invention discloses a moving target detection method in which the image is partitioned into blocks to build a background model, the motion of the background is estimated with a sparse optical flow method, and the background model is motion-compensated accordingly. The detection result of the previous frame and the movement speed of the foreground are taken as a background-model update mask to accelerate the convergence of the background model. A saliency segmentation method is used to segment the target at the position of each detection result, yielding the complete target. The invention can quickly and completely detect moving targets in scenes where the camera itself is moving.

Description

Moving target detection method
Technical Field
The invention belongs to the field of moving target detection in computer vision, and particularly relates to a moving target detection method.
Background
Common moving target detection methods include the optical flow method, the frame difference method, and the background difference method. The optical flow method is suitable when the camera itself is moving, but the computational load of dense optical flow is too large to meet real-time requirements on embedded devices; the frame difference and background difference methods cannot be applied when the camera is moving, and their detection results are prone to holes and incomplete targets.
Disclosure of Invention
The invention aims to provide a moving target detection method that can quickly and accurately detect moving targets in scenes where the camera itself is moving.
To achieve this purpose, the invention adopts the following technical scheme:
To detect a moving target while the camera is also moving, the motion of the background must first be determined; to keep the computational load small, a sparse optical flow method is used for this. To ensure that the motion of the background can be recovered accurately from the optical flow result, the image is divided into uniform grid cells and a feature point is selected in each cell for computing the inter-frame optical flow, so that the resulting flow samples are evenly distributed rather than concentrated on the target foreground. To accelerate the convergence of the background model and stabilize the detection result, the foreground result of the previous frame and the movement speed of the foreground are added as an update mask when the background model is updated; that is, positions covered by the previous frame's target are not updated, and the previous model data are kept there. To ensure the completeness of the detected target, a saliency segmentation method is applied at the position of each preliminarily screened target, finally yielding the complete target.
Specifically, the moving target detection method is realized by the following steps:
Step one, converting the input image into a grayscale image and performing median filtering;
Step two, if the current image is the first frame, initializing the background model with the first frame image; if it is not the first frame, jumping to step three:
(1) Performing a blocking operation on the image, where each block has size ΔB × ΔB;
(2) Calculating the gray-level mean M and the variance V of each image block, as follows:

$$M_i = \frac{1}{\Delta B^2}\sum_{j \in B_i} I_j, \qquad V_i = \max\left(V_{min},\ \frac{1}{\Delta B^2}\sum_{j \in B_i} (I_j - M_i)^2\right)$$

where M_i denotes the gray-level mean of the i-th image block, V_i the variance of the i-th image block, I_j the gray value of pixel j, ΔB the size of the block, B_i the set of pixels in the i-th image block, j the pixel index within block B_i, and V_min the set minimum variance (a value greater than 0);
(3) Initializing an all-ones array T_f of size (width/ΔB) × (height/ΔB) as the target distribution of the previous frame, where width and height denote the width and height of the image, respectively;
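For illustration, the following is a minimal Python/OpenCV sketch of steps one and two (grayscale conversion, median filtering, and block-wise model initialization). All function and constant names (preprocess, block_stats, init_background, DELTA_B, V_MIN) are illustrative assumptions, not taken from the patent; the parameter values follow the embodiment given later (ΔB = 4, V_min = 144).

```python
import cv2
import numpy as np

DELTA_B = 4    # block size ΔB (embodiment value)
V_MIN = 144.0  # floor on the per-block variance V_min (embodiment value)

def preprocess(frame_bgr):
    """Step one: convert to grayscale and median-filter."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.medianBlur(gray, 3)

def block_stats(gray):
    """Step two (1)-(2): per-block gray-level mean M and variance V."""
    h, w = gray.shape
    bh, bw = h // DELTA_B, w // DELTA_B
    blocks = gray[:bh * DELTA_B, :bw * DELTA_B].astype(np.float32)
    blocks = blocks.reshape(bh, DELTA_B, bw, DELTA_B)
    M = blocks.mean(axis=(1, 3))
    V = np.maximum(blocks.var(axis=(1, 3)), V_MIN)  # clamp at V_min
    return M, V

def init_background(gray):
    """Step two: initial model plus the all-ones target map T_f."""
    M, V = block_stats(gray)
    T_f = np.ones_like(M)  # step two (3): previous-frame target distribution
    return M, V, T_f
```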
Step three, calculating the displacement of the background and the displacement of the foreground between the current frame image and the previous frame image, and solving the affine transformation matrix of the background between the two frame images:
(1) Dividing the two frames of images into grids at length-width spacing g, and selecting in each grid cell of the previous frame image the corner point with the most salient features, forming the point set P_pre whose inter-frame displacement is to be solved;
(2) Using KLT to find, in the current frame image, the corresponding point P_cur of each point of P_pre from the previous step;
(3) Calculating an affine transformation matrix H of the two groups of corresponding point sets by using RANSAC;
(4) Taking the parameters related to the horizontal and vertical displacement in the affine transformation matrix obtained in (3) as the displacement of the inter-frame background in the horizontal and vertical directions, recorded as (Δx_b, Δy_b); then, combining the feature-point correspondences between the two frames obtained in (2), obtaining the displacement (Δx_i, Δy_i) of the foreground in each image block, and from it the velocity distribution map S_f of the foreground:

$$S_f(i) = \sqrt{(\Delta x_i - \Delta x_b)^2 + (\Delta y_i - \Delta y_b)^2}$$
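Continuing the sketch above, step three might look as follows in Python/OpenCV; grid_corners and background_affine are illustrative names, the grid spacing g = 10 follows the embodiment, and the goodFeaturesToTrack / calcOpticalFlowPyrLK / estimateAffine2D parameters are reasonable defaults rather than values from the patent.

```python
def grid_corners(gray, g=10):
    """Step three (1): the strongest corner in each g x g grid cell."""
    pts = []
    h, w = gray.shape
    for y in range(0, h - g + 1, g):
        for x in range(0, w - g + 1, g):
            cell = gray[y:y + g, x:x + g]
            c = cv2.goodFeaturesToTrack(cell, maxCorners=1,
                                        qualityLevel=0.01, minDistance=1)
            if c is not None:
                pts.append(c[0, 0] + (x, y))  # back to image coordinates
    return np.array(pts, dtype=np.float32).reshape(-1, 1, 2)

def background_affine(prev_gray, cur_gray, g=10):
    """Step three (2)-(4): KLT correspondences and a RANSAC affine fit."""
    p_pre = grid_corners(prev_gray, g)
    p_cur, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                                   p_pre, None)
    ok = status.ravel() == 1
    p_pre, p_cur = p_pre[ok], p_cur[ok]
    H, _inliers = cv2.estimateAffine2D(p_pre, p_cur, method=cv2.RANSAC)
    dx_b, dy_b = H[0, 2], H[1, 2]  # translation terms = background shift
    return H, (dx_b, dy_b), p_pre, p_cur
```

The per-block foreground speed map S_f would then be the magnitude of each tracked point's residual displacement after subtracting (dx_b, dy_b), accumulated into the block grid.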
Step four, performing motion compensation on the existing background model by using the affine transformation matrix obtained in step three:
(1) Calculating, for each coordinate point of the current-frame background model, the corresponding coordinate point in the previous-frame background model by using the affine transformation matrix:

$$X_{pre} = H^{-1} X_{cur}$$

where X_pre and X_cur denote coordinates in the previous-frame and current background models, respectively, and H denotes the affine transformation matrix;
(2) According to the obtained coordinate correspondence, obtaining the motion-compensated background model by bilinear interpolation; the value of any point that falls outside the range of the previous-frame background model is filled with 0, and values within the valid range are computed as follows:

$$M_{warp}(x_{cur}, y_{cur}) = \mathrm{Bilinear}(M, x_{pre}, y_{pre}), \qquad V_{warp}(x_{cur}, y_{cur}) = \mathrm{Bilinear}(V, x_{pre}, y_{pre})$$

where M_warp and V_warp denote the mean and variance after motion compensation, respectively, (x_cur, y_cur) denotes a coordinate point of the current-frame model, and (x_pre, y_pre) denotes the corresponding coordinate point of the previous-frame model. When the coordinate point obtained in step four (1) satisfies

$$0 \le x_{pre} \le \frac{width}{\Delta B} - 1, \qquad 0 \le y_{pre} \le \frac{height}{\Delta B} - 1,$$

it is calculated by the formula given above; otherwise the value is 0, where width and height denote the width and height of the image, respectively;
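A minimal sketch of step four, again in Python/OpenCV. compensate_model is an illustrative name; using cv2.warpAffine with INTER_LINEAR reproduces the bilinear lookup and zero fill, and scaling the translation by 1/ΔB to move from pixel to block coordinates is my assumption, not something the patent spells out.

```python
def compensate_model(M_model, V_model, H):
    """Step four: warp the block-level model by the background affine H."""
    Hb = H.copy()
    Hb[:, 2] /= DELTA_B  # pixel-level translation -> block-level translation
    size = (M_model.shape[1], M_model.shape[0])  # (width, height) in blocks
    # warpAffine inverts Hb internally, i.e. it samples the previous model at
    # X_pre = H^-1 X_cur; INTER_LINEAR is the bilinear interpolation and
    # borderValue=0 fills points outside the previous model's range.
    M_warp = cv2.warpAffine(M_model, Hb, size,
                            flags=cv2.INTER_LINEAR, borderValue=0)
    V_warp = cv2.warpAffine(V_model, Hb, size,
                            flags=cv2.INTER_LINEAR, borderValue=0)
    return M_warp, V_warp
```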
Step five, updating the background model:
(1) Calculating the block mean M_cur and variance V_cur of the current frame image according to methods (1) and (2) of step two;
(2) Combining the previous-frame target distribution T_f and the foreground velocity distribution map S_f, calculating the update scale coefficient rate of the background model, as follows:
$$rate_i = \begin{cases} 0, & i \in S_f \cap T_f \\ rate, & \text{otherwise} \end{cases}$$

where S_f ∩ T_f denotes the region in which neither the target distribution nor the foreground velocity distribution has the value 0;
(3) Blending M_cur and V_cur with the M_warp and V_warp obtained in step four according to the obtained scale coefficients, as follows:
$$M_{update,i} = (1 - rate_i)\,M_{warp,i} + rate_i\,M_{cur,i}$$

$$V_{update,i} = (1 - rate_i)\,V_{warp,i} + rate_i\,V_{cur,i}$$

where M_update and V_update denote the updated model mean and variance, respectively, i denotes the i-th image block, and rate denotes the set scale coefficient;
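Continuing the sketch, step five could be implemented as below. BASE_RATE stands in for the patent's unspecified "set coefficient" and is an illustrative value; masking with T_f and S_f freezes the model wherever the previous frame saw a moving target.

```python
BASE_RATE = 0.05  # illustrative stand-in for the patent's "set coefficient"

def update_model(M_warp, V_warp, M_cur, V_cur, T_f, S_f):
    """Step five: masked exponential blend of compensated and current stats."""
    rate = np.full_like(M_warp, BASE_RATE)
    rate[(T_f > 0) & (S_f > 0)] = 0.0  # keep old model data under the target
    M_upd = (1.0 - rate) * M_warp + rate * M_cur
    V_upd = np.maximum((1.0 - rate) * V_warp + rate * V_cur, V_MIN)
    return M_upd, V_upd
```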
Step six, segmenting the target region:
(1) Calculating the foreground response surface F_map from the gray values of the current frame image and the updated background model, as follows:

$$F_{map}(j) = \frac{\left|I_j - M_{update,i}\right|}{\sqrt{V_{update,i}}}$$

where j denotes the j-th pixel point of the image, I_j denotes its gray value, and i denotes the index of the image block to which pixel j belongs;
(2) Judging as foreground those parts of the response surface F_map whose value is greater than the threshold, as follows:

$$F_j = \begin{cases} 1, & F_{map}(j) > t \\ 0, & \text{otherwise} \end{cases}$$

where t denotes the set threshold;
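A sketch of step six under the reading of F_map given above (per-pixel deviation from the block mean, normalized by the block standard deviation); foreground_mask and T_RESP are illustrative names and values.

```python
T_RESP = 3.0  # illustrative threshold t

def foreground_mask(gray, M_upd, V_upd):
    """Step six: response surface F_map and thresholded foreground map F."""
    bh, bw = M_upd.shape
    gray_c = gray[:bh * DELTA_B, :bw * DELTA_B].astype(np.float32)
    # Expand block statistics back to pixel resolution.
    M_px = np.kron(M_upd, np.ones((DELTA_B, DELTA_B), np.float32))
    V_px = np.kron(V_upd, np.ones((DELTA_B, DELTA_B), np.float32))
    F_map = np.abs(gray_c - M_px) / np.sqrt(V_px)
    return (F_map > T_RESP).astype(np.uint8)
```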
Step seven, screening and segmenting the target region:
(1) Performing connected-component analysis on the foreground map F obtained in step six, and eliminating regions that are too small or do not conform to the target characteristics of the application scene;
(2) In the current frame image, applying saliency segmentation at the position corresponding to the centroid of each target region obtained in the previous step, to obtain the final position and size of the target;
(3) Obtaining the target distribution T_f from the final target image by the method used to calculate the background-model mean.
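Step seven (1) maps naturally onto OpenCV's connected-component analysis; the area bounds below are illustrative, scene-dependent assumptions, and the saliency segmentation of step seven (2) is left out of the sketch.

```python
MIN_AREA, MAX_AREA = 16, 10000  # illustrative scene-dependent bounds

def screen_targets(fg_mask):
    """Step seven (1): drop components that cannot be plausible targets."""
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(fg_mask)
    targets = []
    for i in range(1, n):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if MIN_AREA <= area <= MAX_AREA:
            targets.append((centroids[i], stats[i, :4]))  # centroid, bbox
    return targets
```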
By dividing the target area and the template area and using the change of the correlation coefficient of each part, the invention can rapidly and effectively detect partial occlusion and complete occlusion, and can effectively suppress false alarms.
Drawings
FIG. 1 is the flow chart of moving target detection;
FIG. 2 shows the background model;
FIG. 3 shows the obtained target foreground result;
FIG. 4 shows the final moving target detection result.
Detailed Description
The present invention will be described in detail with reference to FIGS. 1 to 4, and the technical solutions in the embodiments of the present invention will be described clearly and completely below. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
A moving object detection method comprises the following steps:
Step one, converting the input image into a grayscale image and performing median filtering;
Step two, if the current image is the first frame, initializing the background model with the first frame image; if it is not the first frame, skipping to step three:
(1) Performing a blocking operation on the image, where each block has size ΔB × ΔB;
(2) Calculating the gray-level mean M and the variance V of each image block, as follows:

$$M_i = \frac{1}{\Delta B^2}\sum_{j \in B_i} I_j, \qquad V_i = \max\left(V_{min},\ \frac{1}{\Delta B^2}\sum_{j \in B_i} (I_j - M_i)^2\right)$$

where M_i denotes the gray-level mean of the i-th image block, V_i the variance of the i-th image block, I_j the gray value of pixel j, ΔB the size of the block, B_i the set of pixels in the i-th image block, j the pixel index within block B_i, and V_min the set minimum variance (a value greater than 0);
(3) Initializing an all-ones array T_f of size (width/ΔB) × (height/ΔB) as the target distribution of the previous frame, where width and height denote the width and height of the image, respectively;
Step three, calculating the displacement of the background and the displacement of the foreground between the current frame image and the previous frame image, and obtaining the affine transformation matrix of the background between the two frame images:
(1) Dividing the two frames of images into grids at length-width spacing g, and selecting in each grid cell of the previous frame image the corner point with the most salient features, forming the point set P_pre whose inter-frame displacement is to be solved;
(2) Using KLT to find, in the current frame image, the corresponding point P_cur of each point of P_pre from the previous step;
(3) Calculating an affine transformation matrix H of the two groups of corresponding point sets by using RANSAC;
(4) Taking the parameters related to the horizontal and vertical displacement in the affine transformation matrix obtained in (3) as the displacement of the inter-frame background in the horizontal and vertical directions, recorded as (Δx_b, Δy_b); then, combining the feature-point correspondences between the two frames obtained in (2), obtaining the displacement (Δx_i, Δy_i) of the foreground in each image block, and from it the velocity distribution map S_f of the foreground:

$$S_f(i) = \sqrt{(\Delta x_i - \Delta x_b)^2 + (\Delta y_i - \Delta y_b)^2}$$
Step four, performing motion compensation on the existing background model by using the affine transformation matrix obtained in step three:
(1) Calculating, for each coordinate point of the current-frame background model, the corresponding coordinate point in the previous-frame background model by using the affine transformation matrix:

$$X_{pre} = H^{-1} X_{cur}$$

where X_pre and X_cur denote coordinates in the previous-frame and current background models, respectively, and H denotes the affine transformation matrix;
(2) According to the obtained coordinate correspondence, obtaining the motion-compensated background model by bilinear interpolation; the value of any point that falls outside the range of the previous-frame background model is filled with 0, and values within the valid range are computed as follows:

$$M_{warp}(x_{cur}, y_{cur}) = \mathrm{Bilinear}(M, x_{pre}, y_{pre}), \qquad V_{warp}(x_{cur}, y_{cur}) = \mathrm{Bilinear}(V, x_{pre}, y_{pre})$$

where M_warp and V_warp denote the mean and variance after motion compensation, respectively, (x_cur, y_cur) denotes a coordinate point of the current-frame model, and (x_pre, y_pre) denotes the corresponding coordinate point of the previous-frame model. When the coordinate point obtained in step four (1) satisfies

$$0 \le x_{pre} \le \frac{width}{\Delta B} - 1, \qquad 0 \le y_{pre} \le \frac{height}{\Delta B} - 1,$$

it is calculated by the formula given above; otherwise the value is 0, where width and height denote the width and height of the image, respectively;
Step five, updating the background model:
(1) Calculating the block mean M_cur and variance V_cur of the current frame image according to methods (1) and (2) of step two;
(2) Combining the previous-frame target distribution T_f and the foreground velocity distribution map S_f, calculating the update scale coefficient rate of the background model, as follows:
$$rate_i = \begin{cases} 0, & i \in S_f \cap T_f \\ rate, & \text{otherwise} \end{cases}$$

where S_f ∩ T_f denotes the region in which neither the target distribution nor the foreground velocity distribution has the value 0;
(3) Blending M_cur and V_cur with the M_warp and V_warp obtained in step four according to the obtained scale coefficients, as follows:
$$M_{update,i} = (1 - rate_i)\,M_{warp,i} + rate_i\,M_{cur,i}$$

$$V_{update,i} = (1 - rate_i)\,V_{warp,i} + rate_i\,V_{cur,i}$$

where M_update and V_update denote the updated model mean and variance, respectively, i denotes the i-th image block, and rate denotes the set scale coefficient;
Step six, segmenting the target region:
(1) Calculating the foreground response surface F_map from the gray values of the current frame image and the updated background model, as follows:

$$F_{map}(j) = \frac{\left|I_j - M_{update,i}\right|}{\sqrt{V_{update,i}}}$$

where j denotes the j-th pixel point of the image, I_j denotes its gray value, and i denotes the index of the image block to which pixel j belongs;
(2) Judging as foreground those parts of the response surface F_map whose value is greater than the threshold, as follows:

$$F_j = \begin{cases} 1, & F_{map}(j) > t \\ 0, & \text{otherwise} \end{cases}$$

where t denotes the set threshold;
Step seven, screening and segmenting the target region:
(1) Performing connected-component analysis on the foreground map F obtained in step six, and eliminating regions that are too small or do not conform to the target characteristics of the application scene;
(2) In the current frame image, applying saliency segmentation at the position corresponding to the centroid of each target region obtained in the previous step, to obtain the final position and size of the target;
(3) Obtaining the target distribution T_f from the final target image by the method used to calculate the background-model mean.
The parameter values in this embodiment are as follows:
ΔB = 4, V_min = 144, g = 10.
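To show how the sketches above might compose per frame with the embodiment's parameters (ΔB = 4, V_min = 144, g = 10), here is an illustrative driver loop; the input path and the omitted T_f / S_f bookkeeping are assumptions, not part of the patent.

```python
cap = cv2.VideoCapture("input.mp4")  # hypothetical input video
ok, frame = cap.read()
prev_gray = preprocess(frame)
M, V, T_f = init_background(prev_gray)
S_f = np.zeros_like(M)  # no foreground speed known for the first frame

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = preprocess(frame)
    H, (dx_b, dy_b), p_pre, p_cur = background_affine(prev_gray, gray, g=10)
    M_w, V_w = compensate_model(M, V, H)
    M_cur, V_cur = block_stats(gray)
    M, V = update_model(M_w, V_w, M_cur, V_cur, T_f, S_f)
    fg = foreground_mask(gray, M, V)
    targets = screen_targets(fg)
    # Refreshing T_f from the final detections and S_f from the per-block
    # flow residuals (steps seven (3) and three (4)) is omitted for brevity.
    prev_gray = gray
```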
the previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. A moving object detection method is characterized by comprising the following steps,
s1, converting an input image into a gray image, and performing median filtering;
s2, if the image is the first frame image, initializing a background model by using the first frame image, and if the image is not the first frame, skipping to the step S1;
s3, calculating the displacement of the background and the displacement of the foreground of the current frame image and the previous frame image and solving an affine transformation matrix of the background between the two frame images:
s4, performing motion compensation on the existing background model by using the affine transformation matrix obtained in the step S3;
s5, updating the background model;
s6, dividing a target area;
and S7, screening and dividing the target area.
2. The method for detecting a moving object according to claim 1, wherein the specific method of step S2 is as follows:
S21, performing a blocking operation on the image, where each block has size ΔB × ΔB;
S22, calculating the gray-level mean M and the variance V of each image block, as follows:

$$M_i = \frac{1}{\Delta B^2}\sum_{j \in B_i} I_j, \qquad V_i = \max\left(V_{min},\ \frac{1}{\Delta B^2}\sum_{j \in B_i} (I_j - M_i)^2\right)$$

where M_i denotes the gray-level mean of the i-th image block, V_i the variance of the i-th image block, I_j the gray value of pixel j, ΔB the size of the block, B_i the set of pixels in the i-th image block, j the pixel index within block B_i, and V_min the set minimum variance (a value greater than 0);
S23, initializing an all-ones array T_f of size (width/ΔB) × (height/ΔB) as the target distribution of the previous frame, where width and height denote the width and height of the image, respectively.
3. The method for detecting a moving object according to claim 2, wherein the specific method of step S3 is as follows:
S31, dividing the two frames of images into grids at length-width spacing g, and selecting in each grid cell of the previous frame image the corner point with the most salient features, forming the point set P_pre whose inter-frame displacement is to be solved;
S32, using KLT to find, in the current frame image, the corresponding point P_cur of each point of P_pre from the previous step;
S33, calculating the affine transformation matrix H of the two corresponding point sets by using RANSAC;
S34, taking the parameters related to the horizontal and vertical displacement in the affine transformation matrix obtained in step S33 as the displacement of the inter-frame background in the horizontal and vertical directions, recorded as (Δx_b, Δy_b); then, combining the feature-point correspondences between the two frames obtained in step S32, obtaining the displacement (Δx_i, Δy_i) of the foreground in each image block, and from it the velocity distribution map S_f of the foreground:

$$S_f(i) = \sqrt{(\Delta x_i - \Delta x_b)^2 + (\Delta y_i - \Delta y_b)^2}$$
4. The method for detecting a moving object according to claim 3, wherein the specific method of step S4 is as follows:
S41, calculating, for each coordinate point of the current-frame background model, the corresponding coordinate point in the previous-frame background model by using the affine transformation matrix:

$$X_{pre} = H^{-1} X_{cur}$$

where X_pre and X_cur denote coordinates in the previous-frame and current background models, respectively, and H denotes the affine transformation matrix;
S42, obtaining the motion-compensated background model by bilinear interpolation according to the obtained coordinate correspondence; the value of any point that falls outside the range of the previous-frame background model is filled with 0, and values within the valid range are computed as follows:

$$M_{warp}(x_{cur}, y_{cur}) = \mathrm{Bilinear}(M, x_{pre}, y_{pre})$$

$$V_{warp}(x_{cur}, y_{cur}) = \mathrm{Bilinear}(V, x_{pre}, y_{pre})$$

where M_warp and V_warp denote the mean and variance after motion compensation, respectively, (x_cur, y_cur) denotes a coordinate point of the current-frame model, and (x_pre, y_pre) denotes the corresponding coordinate point of the previous-frame model;
when the coordinate point obtained in step S41 satisfies

$$0 \le x_{pre} \le \frac{width}{\Delta B} - 1, \qquad 0 \le y_{pre} \le \frac{height}{\Delta B} - 1,$$

it is calculated by the formula given above; otherwise the value is 0, where width and height denote the width and height of the image, respectively.
5. The moving object detection method according to claim 4, wherein the specific method of step S5 is as follows:
S51, calculating the block mean M_cur and variance V_cur of the current frame image according to the methods of steps S21 and S22;
S52, combining the previous-frame target distribution T_f and the foreground velocity distribution map S_f, calculating the update scale coefficient rate of the background model, as follows:

$$rate_i = \begin{cases} 0, & i \in S_f \cap T_f \\ rate, & \text{otherwise} \end{cases}$$

where S_f ∩ T_f denotes the region in which neither the target distribution nor the foreground velocity distribution has the value 0;
S53, blending M_cur and V_cur with the M_warp and V_warp obtained in step S4 according to the obtained scale coefficients, as follows:

$$M_{update,i} = (1 - rate_i)\,M_{warp,i} + rate_i\,M_{cur,i}$$

$$V_{update,i} = (1 - rate_i)\,V_{warp,i} + rate_i\,V_{cur,i}$$

where M_update and V_update denote the updated model mean and variance, respectively, i denotes the i-th image block, and rate denotes the set scale coefficient.
6. The method for detecting a moving object according to claim 5, wherein the specific method of step S6 is as follows:
S61, calculating the foreground response surface F_map from the gray values of the current frame image and the updated background model, as follows:

$$F_{map}(j) = \frac{\left|I_j - M_{update,i}\right|}{\sqrt{V_{update,i}}}$$

where j denotes the j-th pixel point of the image, I_j denotes its gray value, and i denotes the index of the image block to which pixel j belongs;
S62, judging as foreground those parts of the response surface F_map whose value is greater than the threshold, as follows:

$$F_j = \begin{cases} 1, & F_{map}(j) > t \\ 0, & \text{otherwise} \end{cases}$$

where t denotes the set threshold.
7. The method for detecting a moving object according to claim 6, wherein the specific method of step S7 is as follows:
S71, performing connected-component analysis on the foreground map F obtained in step S6, and eliminating regions that are too small or do not conform to the target characteristics of the application scene;
S72, in the current frame image, applying saliency segmentation at the position corresponding to the centroid of each target region obtained in the previous step, to obtain the final position and size of the target;
S73, obtaining the target distribution T_f from the final target image by the method used to calculate the background-model mean.
CN202211236531.1A 2022-10-10 2022-10-10 Moving target detection method Pending CN115564805A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211236531.1A CN115564805A (en) 2022-10-10 2022-10-10 Moving target detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211236531.1A CN115564805A (en) 2022-10-10 2022-10-10 Moving target detection method

Publications (1)

Publication Number Publication Date
CN115564805A 2023-01-03

Family

ID=84745438

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211236531.1A Pending CN115564805A (en) 2022-10-10 2022-10-10 Moving target detection method

Country Status (1)

Country Link
CN (1) CN115564805A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116468751A (en) * 2023-04-25 2023-07-21 北京拙河科技有限公司 High-speed dynamic image detection method and device


Similar Documents

Publication Publication Date Title
CN104200485B (en) Video-monitoring-oriented human body tracking method
CN106780576B (en) RGBD data stream-oriented camera pose estimation method
CN104820996B (en) A kind of method for tracking target of the adaptive piecemeal based on video
CN106846359A (en) Moving target method for quick based on video sequence
CN110941999B (en) Method for adaptively calculating size of Gaussian kernel in crowd counting system
CN108876820B (en) Moving target tracking method under shielding condition based on mean shift
CN110910421B (en) Weak and small moving object detection method based on block characterization and variable neighborhood clustering
CN111144213B (en) Object detection method and related equipment
CN110147816B (en) Method and device for acquiring color depth image and computer storage medium
CN111178193A (en) Lane line detection method, lane line detection device and computer-readable storage medium
CN106372598A (en) Image stabilizing method based on image characteristic detection for eliminating video rotation and jittering
CN113327296B (en) Laser radar and camera online combined calibration method based on depth weighting
CN115564805A (en) Moving target detection method
CN107945207A (en) A kind of real-time object tracking method based on video interframe low-rank related information uniformity
CN110751635A (en) Oral cavity detection method based on interframe difference and HSV color space
CN112541938A (en) Pedestrian speed measuring method, system, medium and computing device
CN116229359A (en) Smoke identification method based on improved classical optical flow method model
CN115565130A (en) Unattended system and monitoring method based on optical flow
CN111144377A (en) Dense area early warning method based on crowd counting algorithm
CN111260725B (en) Dynamic environment-oriented wheel speed meter-assisted visual odometer method
CN111046809B (en) Obstacle detection method, device, equipment and computer readable storage medium
CN107657628A (en) A kind of real-time color method for tracking target
CN104240268B (en) A kind of pedestrian tracting method based on manifold learning and rarefaction representation
CN108205814B (en) Method for generating black and white contour of color image
CN112348853B (en) Particle filter tracking method based on infrared saliency feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination