CN103325112B - Method for rapidly detecting a moving target in a dynamic scene - Google Patents

Method for rapidly detecting a moving target in a dynamic scene

Info

Publication number
CN103325112B
CN103325112B (application CN201310222645.5A)
Authority
CN
China
Prior art keywords
frame
background
pixel
foreground
image
Prior art date
Legal status
Expired - Fee Related
Application number
CN201310222645.5A
Other languages
Chinese (zh)
Other versions
CN103325112A (en)
Inventor
张红颖 (Zhang Hongying)
胡正 (Hu Zheng)
孙毅刚 (Sun Yigang)
Current Assignee
Civil Aviation University of China
Original Assignee
Civil Aviation University of China
Priority date
Filing date
Publication date
Application filed by Civil Aviation University of China
Priority to CN201310222645.5A
Publication of CN103325112A
Application granted
Publication of CN103325112B
Legal status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

A method for rapidly detecting moving targets in a dynamic scene. It comprises registering the frames of a moving image sequence with CenSurE feature points and a homography transformation model, obtaining a registered previous frame with the current frame as reference; differencing this registration frame with the current frame to obtain a frame difference map and generate a foreground mask, constructing a dynamic background updated in real time from the spatial distribution of the foreground mask in the current frame, and obtaining a background subtraction map by background subtraction; and counting the probability density of each pixel gray level in the frame difference map and the background subtraction map, taking a gray level as the adaptive segmentation threshold when its cumulative probability exceeds 2φ(k) - 1, and judging pixels with gray values above this threshold as foreground pixels, otherwise as background pixels. The method reaches a processing speed of 15 frames/second and obtains a relatively complete moving target while maintaining that speed, and can therefore meet the requirements of rapidity, noise immunity, illumination adaptability and target integrity for moving target detection in dynamic scenes.

Description

Method for rapidly detecting a moving target in a dynamic scene
Technical Field
The invention belongs to the technical field of civil aviation, and particularly relates to a method for quickly detecting a moving target in a dynamic scene.
Background
Moving target detection extracts moving objects from video sequence images and is an important basis for higher-level processing in computer vision such as target recognition, tracking and behavior analysis. According to the motion state of the camera, moving target detection can be divided into two types: detection in a static scene and detection in a dynamic scene. Detection technology in static scenes is relatively mature and has been widely applied in fixed video surveillance, common algorithms including background subtraction based on a Gaussian mixture model. In a dynamic scene, where both the camera and the target move, detection becomes more difficult; it is thus a hot and difficult topic of current moving target detection research, with broad application prospects in military target strike, aerial tracking of ground targets, panoramic surveillance with rotating cameras, and other fields.
Current moving object detection methods under a dynamic background mainly comprise the optical flow method and the background motion compensation method.
The optical flow method distinguishes moving objects by exploiting the fact that an object and the background move at different speeds and therefore produce clearly different optical flows. Its main advantage is that it is not limited by the motion state of the camera and applies to moving object detection under both static and dynamic backgrounds. However, its computation is very expensive, making real-time operation difficult, and it is strongly affected by illumination, noise and target occlusion, which limits its range of application.
The background motion compensation method registers consecutive frames through background motion parameters and a transformation model, converting the problem of detecting a moving object in a dynamic scene into that of detecting it in a static scene. For motion compensation methods the following documents may be consulted:
[1] SUHR J K, JUNG H G, LI G, et al. Background compensation for pan-tilt-zoom cameras using 1-D feature matching and outlier rejection [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2011, 21(3): 371-377.
[2] WANG M, TU D, ZHOU X. Moving target detection fusing SIFT feature matching and differential multiplication [J]. Optics and Precision Engineering, 2011, 19(4): 892-899.
[3] Moving object detection based on variable-block differencing in complex scenes [J]. Optics and Precision Engineering, 2011, 19(1): 183-191.
After motion compensation, the background between two adjacent frames is approximately static, and the pixels that differ between the two frames, detected with a frame difference method, belong to the moving target. The frame difference method is simple, easy to implement and adapts well to global illumination changes in the scene, but for a moving target with a uniform gray-level distribution its detection result contains large holes, leaving the target incomplete and producing a ghosting phenomenon.
Differential multiplication on top of the frame difference method can eliminate the ghosting in the detection result, but the multiplication also reduces foreground information and thus enlarges the holes.
Variable-block differencing can eliminate holes to a certain extent, but the block-wise processing gives the target edges a sawtooth appearance, and the discrimination thresholds between background blocks and foreground blocks are hard to determine and strongly affected by noise.
For the preliminary detection result, existing methods usually extract binary foreground pixels with a fixed threshold or the OTSU method before subsequent processing such as target tracking, recognition and behavior analysis. Fixed-threshold binarization is suitable for separating foreground and background in a static scene and is simple and feasible, but in a dynamic scene with a moving camera the foreground pixels segmented by a fixed threshold are inaccurate, and valid foreground pixels may not be segmented at all. The OTSU method determines a segmentation threshold by the maximum between-class variance principle and can adapt to scene changes, but its binary segmentation performs poorly on preliminary detection maps whose gray-level histograms lack obvious peaks and valleys, greatly increasing the risk of false detection.
Disclosure of Invention
In order to solve the above problems, the invention aims to provide a method for rapidly detecting a moving target in a dynamic scene that combines real-time detection with complete detection results.
In order to achieve the above object, the method for rapidly detecting a moving object provided by the present invention comprises the following steps performed in sequence:
1) registering the frames of the moving image sequence rapidly and accurately using CenSurE feature points and a homography transformation model, so as to compensate the translation, rotation and scaling of the background between frames caused by camera motion, thereby obtaining a registration frame of the previous frame;
2) in time sequence, performing difference on a current frame and a registration frame of the previous frame to obtain a frame difference image so as to generate a moving target foreground mask, then constructing a real-time updated dynamic background according to spatial distribution information of the foreground mask in the current frame, and finally obtaining a background subtraction image containing a moving foreground target by using a background subtraction method;
3) counting with a histogram the probability density of each pixel gray level in the frame difference map and the background subtraction map of step 2); when the cumulative probability of a certain gray level exceeds the threshold 2φ(k) - 1, that gray level is the obtained adaptive segmentation threshold; pixels with gray values above it are judged as foreground pixels, otherwise as background pixels.
The registration method in the step 1) is as follows: firstly, CenSurE feature points of two adjacent frames before and after are extracted, a feature point descriptor is generated by using U-SURF, then Euclidean distance is used as feature similarity measurement, a feature classification strategy is adopted to quickly match feature point sets of the two adjacent frames, partial external points are filtered out by a random sampling consistency algorithm to obtain accurate background matching point pairs, finally, an accurate interframe homography matrix is calculated by using a least square method, and the previous frame is transformed according to the homography matrix to obtain a registration frame of the previous frame.
The method for generating the background subtraction map containing the foreground moving target in step 2) is as follows: first, the current frame of the moving image sequence is differenced with the registration frame of the previous frame to obtain a frame difference map; then adaptive binary segmentation is performed on the frame difference map, moving target blocks are detected by contour detection and calibrated with their minimum bounding rectangles, yielding a foreground mask that contains the largest possible region of the moving target in the time domain; then the first frame of the sequence is taken as the initial background frame, the foreground mask region of the background frame is replaced in real time by the corresponding region of the registration frame obtained in step 1), and the other regions of the background frame are updated with the corresponding regions of the current frame, giving a dynamically updated real-time background image; finally a background subtraction map containing the foreground moving target is obtained by background subtraction.
The segmentation method with the adaptive segmentation threshold in step 3) is as follows: for the frame difference map and the background subtraction map of step 2), the difference between each pixel and the mean of all pixels of the frame is counted; if the difference is smaller than a certain threshold the pixel is judged as a background pixel, otherwise as a foreground pixel. Then, according to the normal distribution law of a random variable, the distribution probability of each gray level is counted; if the cumulative probability is greater than a certain threshold the pixel is judged as a foreground pixel, otherwise as a background pixel, and the corresponding gray level is the adaptive segmentation threshold.
The method for rapidly detecting a moving target in a dynamic scene according to the invention reaches a processing speed of 15 frames/second and obtains a complete moving target while maintaining detection speed, so it can basically meet the requirements of rapidity, noise immunity, illumination adaptability and target integrity for moving target detection in dynamic scenes.
Drawings
Fig. 1 is a flowchart of a method for rapidly detecting a moving object in a dynamic scene according to the present invention.
Figs. 2a-2d show, respectively, two adjacent frames of the Coastguard standard test sequence, the moving target detection result obtained on them with differential multiplication, and the result obtained with the method of the invention.
Figs. 3a-3d show the corresponding comparison for two adjacent frames of the Stefan standard test sequence.
Figs. 4a-4d show the corresponding comparison for two adjacent frames of the indoor robot test sequence.
Fig. 5 shows the foreground binary segmentation of the detection results of figs. 2b, 3b and 4b using the OTSU method.
Fig. 6 shows the foreground binary segmentation of the same detection results using the method of the invention.
Detailed Description
The following describes in detail a method for rapidly detecting a moving object in a dynamic scene, with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, the method for rapidly detecting a moving object in a dynamic scene provided by the present invention includes the following steps performed in sequence:
1) first, exploiting the rapidity and accuracy of CenSurE feature point extraction, the feature points and a homography transformation model are used to register the frames of the moving image sequence rapidly and accurately, compensating the translation, rotation and scaling of the inter-frame background caused by camera motion and obtaining a registration frame of the previous frame;
The registration method is as follows: first, the CenSurE feature points of the two adjacent frames are extracted and feature point descriptors are generated with U-SURF; then, with Euclidean distance as the feature similarity measure, a feature classification strategy is adopted to match the feature point sets of the two adjacent frames quickly, and outliers are filtered with the random sample consensus (RANSAC) algorithm to obtain accurate background matching point pairs; finally, an accurate inter-frame homography matrix is computed by least squares and the previous frame is resampled according to this homography matrix, yielding the registration frame of the previous frame.
The key to registration is to compute the inter-frame transformation relationship of the motion image sequence and then compensate for background motion due to camera motion by this transformation. Planar homography is defined as the projection mapping from one plane to another, and the homography matrix relates the location of a set of points on the source image plane to the location of a set of points on the target image plane.
In practical engineering the displacement between two adjacent frames is usually only a few pixels and a dynamic scene changes slowly, without abrupt mutation. The feature point extraction algorithm used for solving the homography matrix therefore needs good real-time performance, good invariance to small translation, rotation and scale changes, and robustness to illumination, noise and a certain degree of viewpoint change; CenSurE meets these requirements well.
CenSurE is a local invariant feature with extremely high computational efficiency. Its main idea is to construct a scale space with a bi-level Laplacian-of-Gaussian filter, accelerate the computation of each pixel's center-surround Haar wavelet response with integral images, detect local extrema by non-maximum suppression, and finally filter out unstable points with small responses or lying on edges or lines.
Considering that the angular deviation between two adjacent frames is small, the U-SURF feature descriptor from the SURF algorithm proposed by Bay et al. satisfies the robustness requirement for feature points under small-angle rotation while being fast to compute. Within each sub-region around the feature point, the Haar wavelet responses dx and dy are given different weight coefficients and accumulated into a four-dimensional vector V = (Σdx, Σdy, Σ|dx|, Σ|dy|) describing the sub-region; performing the same operation on all 16 sub-regions yields a 64-dimensional feature description vector, which is finally normalized to remove the influence of illumination on the descriptor.
Euclidean distance is adopted as the similarity measure between feature vectors. For any feature point in the feature point set of the current frame, the two feature points with the nearest and second-nearest Euclidean distances are found in the feature point set of the previous frame to be registered; if the ratio of the nearest to the second-nearest distance satisfies

d_nearest / d_second < T    (1)
then the nearest pair is considered successfully matched. Since CenSurE feature points come in two types, maxima and minima, the invention classifies them before matching, which both speeds up matching and improves its accuracy. At this point a few matching point pairs may come from the moving foreground target, or a small number of mismatches may remain; these are filtered out with random sample consensus (RANSAC).
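For illustration only, this registration front end can be sketched in Python with OpenCV, whose StarDetector implements CenSurE and whose SURF offers an upright (U-SURF) mode; the helper name register_pair and all parameter values (ratio T = 0.7, RANSAC threshold 3.0) are assumptions, not values fixed by the patent, and the max/min extremum classification step described above is omitted for brevity:

```python
import cv2
import numpy as np

# CenSurE (STAR) detection + U-SURF description + ratio-test matching
# + RANSAC, using OpenCV's contrib modules (opencv-contrib-python).
star = cv2.xfeatures2d.StarDetector_create()      # OpenCV's CenSurE
surf = cv2.xfeatures2d.SURF_create(upright=True)  # upright SURF = U-SURF

def register_pair(prev_gray, curr_gray, ratio_T=0.7):
    """Warp the previous frame into the current frame's coordinates.

    Returns (registered_prev, H) where H maps previous -> current."""
    kp1 = star.detect(prev_gray)
    kp1, des1 = surf.compute(prev_gray, kp1)
    kp2 = star.detect(curr_gray)
    kp2, des2 = surf.compute(curr_gray, kp2)

    # Euclidean distance as similarity; keep a match only when the
    # nearest / second-nearest ratio satisfies eq. (1).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < ratio_T * n.distance]

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects pairs coming from the moving foreground; OpenCV
    # then refines H on the inliers by least squares.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    h, w = curr_gray.shape
    # warpPerspective uses bilinear interpolation by default.
    return cv2.warpPerspective(prev_gray, H, (w, h)), H
```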
Planar homography is defined as the projective mapping from one plane to another; it relates the positions of the feature point set of the previous frame to be registered to the positions of the feature point set of the current frame. The homography matrix is:
H = [ h11 h12 h13 ; h21 h22 h23 ; h31 h32 h33 ] = [ h1^T ; h2^T ; h3^T ]    (2)

where hi^T = (hi1, hi2, hi3) denotes the i-th row of H. The planar homography matrix has only 8 degrees of freedom, so let h33 = 1.
Let p = (x, y, 1)^T and q = (u, v, 1)^T be the homogeneous coordinates of a matching point pair; then p is transformed to q by the homography matrix:

q = Hp    (3)

where H encodes the translation, rotation and scaling changes between the two adjacent frames.
Expanding the above formula gives

(u, v, 1)^T = (h1^T·p, h2^T·p, h3^T·p)^T / (h3^T·p),

that is, h1^T·p - u·(h3^T·p) = 0 and h2^T·p - v·(h3^T·p) = 0. Substituting hi^T = (hi1, hi2, hi3) and rearranging yields a system of equations in the eight unknowns hij:

h11·x + h12·y + h13 - u·h31·x - u·h32·y = u
h21·x + h22·y + h23 - v·h31·x - v·h32·y = v    (4)
from equation (4), theoretically, only 4 matching point pairs are needed to calculate the planar homography matrix with 8 degrees of freedom. In order to obtain more accurate and robust transformation parameters, more matching point pairs are extracted in a background area, and an optimal transformation matrix is solved through a least square method. The matrix representation is shown in formula (5).
AX=B(5)
where A is the 2n×8 coefficient matrix built from the row pair

[ xi  yi  1  0  0  0  -xi·ui  -yi·ui ]
[ 0  0  0  xi  yi  1  -xi·vi  -yi·vi ]

for each matching point pair, X(8×1) = (h11, h12, h13, h21, h22, h23, h31, h32)^T, and B(2n×1) = (u1, v1, ..., un, vn)^T; here (xi, yi) and (ui, vi) are the coordinates of the matched background feature point pairs in the previous frame and the current frame respectively, and n ≥ 4.
The previous frame is mapped onto the registration frame using the inter-frame homography matrix, with pixel gray values at non-integer positions obtained by bilinear interpolation, thereby compensating background rotation, scaling, translation and other changes caused by camera motion. A six-parameter affine transformation model can also describe linear transformations of a planar image such as translation, rotation and scaling, and is used for global background motion estimation under a moving camera, but it can only perform parallel mapping of the image plane, which requires the scene to be far enough from the camera that it can be treated as planar. In fact, affine transformation can be understood as the special case of planar homography in which h31 = h32 = 0, so the registration model of the invention describes a general plane-to-plane mapping in 3D space and is more general than the affine transformation model.
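For clarity, the least-squares solve of eq. (5) can also be written out directly; a minimal sketch with NumPy (the helper name homography_lstsq is hypothetical):

```python
import numpy as np

def homography_lstsq(prev_pts, curr_pts):
    """Least-squares solution of AX = B, eq. (5), with h33 fixed to 1.

    prev_pts, curr_pts: (n, 2) arrays of matched background points
    (xi, yi) in the previous frame and (ui, vi) in the current frame,
    n >= 4."""
    A, B = [], []
    for (x, y), (u, v) in zip(prev_pts, curr_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])  # row pair from eq. (4)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        B.extend([u, v])
    X, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(B, float),
                            rcond=None)
    return np.append(X, 1.0).reshape(3, 3)  # (h11 ... h32, h33 = 1)
```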
2) After the inter-frame registration of step 1), the background between two adjacent frames of the sequence is relatively static: translation, rotation, scaling and other background changes caused by camera motion have been eliminated, so the main difference in the scene comes from the motion of the foreground target.
The invention extracts a relatively complete moving target by combining spatio-temporal information. The general idea is: first, in the temporal direction, the current frame is differenced with the registration frame of the previous frame to obtain a frame difference map, from which a moving target foreground mask is generated; then a dynamic background updated in real time is constructed according to the spatial distribution of the foreground mask in the current frame; finally a background subtraction map containing the moving foreground target is obtained by background subtraction.
let f (x, y, t) be the t-th frame of the motion image sequence, and f' (x, y, t-1) be the registration result of the t-1 frame image when the t-th frame of the sequence is taken as the reference frame, and the frame difference map is obtained by the frame difference method as follows:
dif(x,y,t)=|f(x,y,t)-f′(x,y,t-1)|(6)
The frame difference map is then binarized:

difB(x, y, t) = 1, if dif(x, y, t) ≥ Th1
difB(x, y, t) = 0, if dif(x, y, t) < Th1    (7)

where Th1 is the adaptive segmentation threshold between foreground and background of the frame difference map; its determination is introduced in step 3).
A contour detection method is applied to the binarized frame difference map to detect moving target blocks; noise blocks of small area are removed, each moving target block is calibrated with its minimum bounding rectangle, the gray value of pixels inside these rectangles is set to 1, and that of all other pixels to 0. This yields the temporal moving target foreground mask:

M(x, y, t) = 1 inside the calibrated rectangles, 0 elsewhere    (8)

which contains the largest possible region of the moving target.
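A sketch of how the mask of eq. (8) might be produced with OpenCV contour detection and bounding rectangles; the noise-area threshold min_area is an assumed value, and the two-value return of cv2.findContours is the OpenCV 4.x signature:

```python
import cv2
import numpy as np

def foreground_mask(difB, min_area=50):
    """Build M(x, y, t): 1 inside the minimum bounding rectangles of the
    detected moving-target blocks, 0 elsewhere (eq. 8). min_area, used to
    discard small noise blocks, is an assumed value."""
    contours, _ = cv2.findContours(difB, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    M = np.zeros_like(difB)
    for c in contours:
        if cv2.contourArea(c) >= min_area:    # remove small noise blocks
            x, y, w, h = cv2.boundingRect(c)  # minimum bounding rectangle
            M[y:y + h, x:x + w] = 1
    return M
```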
Compared with the frame difference method, background subtraction can extract a more complete moving foreground target. A dynamic background B(x, y, t) updated in real time is created as follows:
(1) First, the first frame of the moving image sequence is taken as the initial background: B(x, y, 1) = f(x, y, 1).
(2) The background image is then updated in real time according to the spatial distribution of the foreground mask. The update principle is: the moving foreground mask region is replaced by the corresponding region of the registered previous background, while the other regions are updated with the current frame of the moving image sequence, namely:

B(x, y, t) = B′(x, y, t-1), if M(x, y, t) = 1
B(x, y, t) = (1 - α)·B′(x, y, t-1) + α·f(x, y, t), if M(x, y, t) = 0    (9)

where B′(x, y, t-1) is the background at time t-1 registered to the current frame by the transformation of step 1):

B′(x, y, t-1) = T(B(x, y, t-1))    (10)

T(·) denotes the homography transformation between the previous frame and the current frame described in step 1), and α is the background update rate factor.
Finally, a background subtraction map containing the moving foreground target is obtained by background subtraction:

Dif(x, y, t) = f(x, y, t) - B(x, y, t)    (11)
The background subtraction map is binarized to obtain the moving foreground target:

F(x, y, t) = 255, if Dif(x, y, t) ≥ Th2
F(x, y, t) = 0, if Dif(x, y, t) < Th2    (12)

where Th2 is the adaptive segmentation threshold between foreground and background of the background subtraction map; its determination is introduced in step 3).
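The background maintenance of eqs. (9)-(11) might then be sketched as below; α = 0.05 is an assumed setting, and the absolute difference in the last line keeps the subtraction map non-negative for unsigned images:

```python
import cv2
import numpy as np

def update_background(B_prev, H, frame, M, alpha=0.05):
    """One background update and subtraction step, eqs. (9)-(11).

    B_prev: background B(x, y, t-1); H: inter-frame homography from
    step 1); frame: current frame f(x, y, t); M: foreground mask of
    eq. (8); alpha: background update rate factor (assumed value)."""
    h, w = frame.shape
    B_reg = cv2.warpPerspective(B_prev, H, (w, h))        # eq. (10)
    # Eq. (9): keep the registered background under the mask, blend
    # the current frame into the background elsewhere.
    B = np.where(M == 1, B_reg,
                 (1 - alpha) * B_reg + alpha * frame).astype(frame.dtype)
    Dif = cv2.absdiff(frame, B)                           # eq. (11)
    return B, Dif
```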
3) To make the adaptive segmentation thresholds of foreground and background in step 2) adapt to scene changes, the invention finally proposes a probability-statistics method for computing them, achieving rapid and accurate segmentation of the foreground target.
The adaptive segmentation threshold between background and foreground must not be too small, or excessive noise is introduced, nor too large, or many moving target foreground points are missed. The Otsu algorithm determines a segmentation threshold by the maximum between-class variance principle, but its binary segmentation performs poorly on frame difference maps and background subtraction maps whose gray-level histograms lack obvious peaks and valleys.
Based on the background-point criterion |X - μ| ≤ 2.5σ of the mixture-of-Gaussians background modeling algorithm, and fully exploiting the near-normal gray-level distribution of the frame difference map and the background subtraction map, the invention proposes a fast adaptive segmentation threshold calculation method based on probability statistics. Specifically, a histogram is used to count the probability density of each gray level in the frame difference map and the background subtraction map of step 2); when the cumulative probability of a certain gray level exceeds 2φ(k) - 1, that gray level is taken as the adaptive segmentation threshold, pixels with gray values above it are judged as foreground pixels, and the rest as background pixels.
For the frame difference map and the background subtraction map of step 2), the difference between each pixel and the mean of all pixels of the frame is counted; if this difference is smaller than a certain threshold the pixel is judged as a background pixel, otherwise as a foreground pixel:

|d(x, y, t) - μt| < k·σt ⇒ background pixel, otherwise foreground pixel    (13)

where d(x, y, t) is the gray value of pixel (x, y) at time t, and μt and σt are the mean and standard deviation of all pixels of the frame. According to the normal distribution law of a random variable:

P{|d(x, y, t) - μt| < k·σt}
= P{-k·σt < d(x, y, t) - μt < k·σt}
= P{μt - k·σt < d(x, y, t) < μt + k·σt}
= φ(k) - φ(-k)
= 2φ(k) - 1    (14)
where φ(·) denotes the standard normal distribution function. Equation (14) shows that, for each pixel of the frame difference map and the background subtraction map of step 2), the pixel is a foreground pixel when the cumulative probability of its gray level exceeds 2φ(k) - 1, and a background pixel otherwise. This adaptive foreground-background segmentation threshold method does not need to explicitly compute the mean and variance of each frame, which greatly simplifies the computation; it is simple and efficient.
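A sketch of this probability-statistics threshold: accumulate the normalized gray-level histogram and take the first gray level whose cumulative probability exceeds 2φ(k) - 1. The choice k = 2.5, mirroring the |X - μ| ≤ 2.5σ rule cited above, is an assumption:

```python
import numpy as np
from math import erf, sqrt

def adaptive_threshold(diff_img, k=2.5):
    """Return the gray level whose cumulative probability first exceeds
    2*phi(k) - 1 (eq. 14). k = 2.5 mirrors the |X - mu| <= 2.5*sigma rule
    of mixture-of-Gaussians background modeling; it is an assumed choice,
    not a value fixed by the patent."""
    phi_k = 0.5 * (1.0 + erf(k / sqrt(2.0)))  # standard normal CDF at k
    target = 2.0 * phi_k - 1.0
    # Normalized gray-level histogram and its cumulative sum; no explicit
    # per-frame mean or variance is needed.
    hist = np.bincount(diff_img.ravel(), minlength=256) / diff_img.size
    cdf = np.cumsum(hist)
    return int(np.searchsorted(cdf, target))

# Pixels above the threshold are foreground, e.g.:
# fg = (diff_img > adaptive_threshold(diff_img)).astype(np.uint8) * 255
```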
To verify the effect of the invention, the inventors tested standard sequences captured under a moving camera, using the Visual Studio 2010 integrated development environment and OpenCV 2.3.1 on a PC configured with a Pentium(R) Dual-Core 2.70 GHz CPU and 2 GB RAM: 1) the Coastguard standard test sequence, size 352×288; 2) the Stefan standard test sequence, size 352×288; 3) an indoor robot (Robots) following sequence, size 320×240 (http://www.ces.clemson.edu/~stb/images/). The test results are shown in figs. 2-6.
The method makes full use of the robustness of CenSurE features to small-scale changes such as scaling, rotation and translation, and of the positional accuracy of the feature points, ensuring accurate computation of the inter-frame transformation parameters; its registration model is better suited to inter-frame background registration under a generally moving camera than an affine transformation model.
The extremely high computational efficiency of the CenSurE operator, the rapidity of the U-SURF descriptor and the efficiency of the probability-statistics foreground segmentation give the method a high running speed: on the test sequences it reaches 15 frames/s, nearly 10 times faster than a differential multiplication algorithm based on SIFT feature matching, as shown in table 1 below.
Table 1. Comparison of the time consumption of the different methods
Comparing fig. 2c with fig. 2d, fig. 3c with fig. 3d, and fig. 4c with fig. 4d shows that, while maintaining detection speed, the spatio-temporal algorithm of the invention detects more moving foreground pixels than the differential multiplication algorithm; the detected target is more complete and the risk of missed detection is greatly reduced.
Finally, comparing fig. 5 with fig. 6, the foreground-background segmentation of the OTSU method introduces excessive noise, whereas the probability-statistics method segments more accurate foreground pixels.
In summary, the invention takes into account both the real-time and the integrity requirements of moving target detection in dynamic scenes, improves the integrity of the detection result while reducing the time consumption of the algorithm, and is particularly suitable for detecting moving targets under a slowly moving camera in complex scenes.
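Read together, the sketches above can be assembled into one detection loop; this assembly and the input file name are illustrative assumptions, not the patent's reference implementation:

```python
import cv2
import numpy as np

# Illustrative assembly of the sketches above into one detection loop.
cap = cv2.VideoCapture("coastguard.avi")           # assumed input file
ok, first = cap.read()
prev = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)
B = prev.copy()                                    # B(x, y, 1) = f(x, y, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    reg, H = register_pair(prev, curr)             # step 1)
    dif = cv2.absdiff(curr, reg)                   # eq. (6)
    Th1 = adaptive_threshold(dif)                  # step 3)
    difB = (dif >= Th1).astype(np.uint8)           # eq. (7)
    M = foreground_mask(difB)                      # eq. (8)

    B, Dif = update_background(B, H, curr, M)      # eqs. (9)-(11)
    Th2 = adaptive_threshold(Dif)
    F = np.where(Dif >= Th2, 255, 0).astype(np.uint8)  # eq. (12)

    cv2.imshow("foreground", F)
    if cv2.waitKey(1) == 27:                       # Esc to quit
        break
    prev = curr
```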

Claims (2)

1. A method for rapidly detecting a moving target in a dynamic scene comprises the following steps in sequence:
1) registering the frames of the moving image sequence rapidly and accurately using CenSurE feature points and a homography transformation model, so as to compensate the translation, rotation and scaling of the background between frames caused by camera motion, thereby obtaining a registration frame of the previous frame;
2) in time sequence, performing difference on a current frame and a registration frame of the previous frame to obtain a frame difference image so as to generate a moving target foreground mask, then constructing a real-time updated dynamic background according to spatial distribution information of the foreground mask in the current frame, and finally obtaining a background subtraction image containing a moving foreground target by using a background subtraction method;
3) counting with a histogram the probability density of each gray level in the frame difference map and the background subtraction map of step 2); when the cumulative probability of a certain gray level exceeds the threshold 2φ(k) - 1, that gray level is the obtained adaptive segmentation threshold, where k is the pixel gray level and φ(k) denotes the standard normal distribution function; pixels with gray values above the threshold are judged as foreground pixels, otherwise as background pixels;
the method is characterized in that: the registration method in the step 1) is as follows: firstly, CenSurE feature points of two adjacent frames before and after are extracted, a feature point descriptor is generated by using U-SURF, then Euclidean distance is used as feature similarity measurement, a feature classification strategy is adopted to quickly match feature point sets of the two adjacent frames, partial external points are filtered out by a random sampling consistency algorithm to obtain accurate background matching point pairs, finally, an accurate interframe homography matrix is calculated by using a least square method, and the previous frame is transformed according to the homography matrix to obtain a registration frame of the previous frame.
2. The method for rapidly detecting a moving target in a dynamic scene according to claim 1, characterized in that the background subtraction map containing the foreground moving target in step 2) is generated as follows: first, the current frame of the moving image sequence is differenced with the registration frame of the previous frame to obtain a frame difference map; then adaptive binary segmentation is performed on the frame difference map, moving target blocks are detected by contour detection and calibrated with their minimum bounding rectangles, yielding a foreground mask that contains the largest possible region of the moving target in the time domain; then the first frame of the sequence is taken as the initial background frame, the foreground mask region of the background frame is replaced in real time by the corresponding region of the registration frame obtained in step 1), and the other regions of the background frame are updated with the corresponding regions of the current frame, giving a dynamically updated real-time background image; finally the background subtraction map containing the foreground moving target is obtained by background subtraction.
CN201310222645.5A 2013-06-07 2013-06-07 Method for rapidly detecting a moving target in a dynamic scene Expired - Fee Related CN103325112B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310222645.5A CN103325112B (en) 2013-06-07 2013-06-07 Method for rapidly detecting a moving target in a dynamic scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310222645.5A CN103325112B (en) 2013-06-07 2013-06-07 Method for rapidly detecting a moving target in a dynamic scene

Publications (2)

Publication Number Publication Date
CN103325112A CN103325112A (en) 2013-09-25
CN103325112B true CN103325112B (en) 2016-03-23

Family

ID=49193835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310222645.5A Expired - Fee Related CN103325112B (en) 2013-06-07 2013-06-07 Method for rapidly detecting a moving target in a dynamic scene

Country Status (1)

Country Link
CN (1) CN103325112B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109389618A (en) * 2017-08-04 2019-02-26 University of Liège Foreground and background detection method

Families Citing this family (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9401027B2 (en) * 2013-10-21 2016-07-26 Nokia Technologies Oy Method and apparatus for scene segmentation from focal stack images
CN103729857B (en) * 2013-12-09 2016-12-07 南京理工大学 Moving target detecting method under mobile camera based on second compensation
CN103942813A (en) * 2014-03-21 2014-07-23 杭州电子科技大学 Single-moving-object real-time detection method in electric wheelchair movement process
CN104036245B (en) * 2014-06-10 2018-04-06 电子科技大学 A kind of biological feather recognition method based on online Feature Points Matching
CN105469421A (en) * 2014-09-04 2016-04-06 南京理工大学 Method based on panoramic system for achieving monitoring of ground moving target
CN104567815B (en) * 2014-12-26 2017-04-19 北京航天控制仪器研究所 Image-matching-based automatic reconnaissance system of unmanned aerial vehicle mounted photoelectric stabilization platform
CN106651902A (en) * 2015-11-02 2017-05-10 李嘉禾 Building intelligent early warning method and system
CN105551033B (en) * 2015-12-09 2019-11-26 广州视源电子科技股份有限公司 Component marking method, system and device
CN105741315B (en) * 2015-12-30 2019-04-02 电子科技大学 A kind of statistics background subtraction method based on down-sampled strategy
CN107292910B (en) * 2016-04-12 2020-08-11 南京理工大学 Moving target detection method under mobile camera based on pixel modeling
CN107316313B (en) * 2016-04-15 2020-12-11 株式会社理光 Scene segmentation method and device
CN106127801A (en) * 2016-06-16 2016-11-16 乐视控股(北京)有限公司 A kind of method and apparatus of moving region detection
CN106447694A (en) * 2016-07-28 2017-02-22 上海体育科学研究所 Video badminton motion detection and tracking method
CN106331625A (en) * 2016-08-30 2017-01-11 天津天地伟业数码科技有限公司 Indoor single human body target PTZ tracking method
CN106227216B (en) * 2016-08-31 2019-11-12 朱明� Home-services robot towards house old man
CN106780541B (en) * 2016-12-28 2019-06-14 南京师范大学 A kind of improved background subtraction method
CN106875415B (en) * 2016-12-29 2020-06-02 北京理工雷科电子信息技术有限公司 Continuous and stable tracking method for small and weak moving targets in dynamic background
CN106651903B (en) * 2016-12-30 2019-08-09 明见(厦门)技术有限公司 A kind of Mobile object detection method
CN107563961A (en) * 2017-09-01 2018-01-09 首都师范大学 A kind of system and method for the moving-target detection based on camera sensor
CN109697724B (en) * 2017-10-24 2021-02-26 北京京东尚科信息技术有限公司 Video image segmentation method and device, storage medium and electronic equipment
CN107911663A (en) * 2017-11-27 2018-04-13 江苏理工学院 A kind of elevator passenger hazardous act intelligent recognition early warning system based on Computer Vision Detection
CN108196285B (en) * 2017-11-30 2021-12-17 中山大学 Accurate positioning system based on multi-sensor fusion
CN108109163A (en) * 2017-12-18 2018-06-01 中国科学院长春光学精密机械与物理研究所 A kind of moving target detecting method for video of taking photo by plane
CN108154520B (en) * 2017-12-25 2019-01-08 北京航空航天大学 A kind of moving target detecting method based on light stream and frame matching
CN110033455B (en) * 2018-01-11 2023-01-03 上海交通大学 Method for extracting target object information from video
CN108305267B (en) * 2018-02-14 2020-08-11 北京市商汤科技开发有限公司 Object segmentation method, device, apparatus, storage medium, and program
CN108846844B (en) * 2018-04-13 2022-02-08 上海大学 Sea surface target detection method based on sea antenna
CN108830834B (en) * 2018-05-23 2022-03-11 重庆交通大学 Automatic extraction method for video defect information of cable climbing robot
CN109035257B (en) * 2018-07-02 2021-08-31 百度在线网络技术(北京)有限公司 Portrait segmentation method, device and equipment
CN109085658B (en) * 2018-07-09 2019-11-15 宁波大学 A kind of indoor human body sensing device
CN109632590B (en) * 2019-01-08 2020-04-17 上海大学 Deep-sea luminous plankton detection method
CN109782811B (en) * 2019-02-02 2021-10-08 绥化学院 Automatic following control system and method for unmanned model vehicle
CN110163831B (en) * 2019-04-19 2021-04-23 深圳市思为软件技术有限公司 Method and device for dynamically displaying object of three-dimensional virtual sand table and terminal equipment
CN110349189A (en) * 2019-05-31 2019-10-18 广州铁路职业技术学院(广州铁路机械学校) A kind of background image update method based on continuous inter-frame difference
EP3798976B1 (en) 2019-09-30 2023-11-01 Tata Consultancy Services Limited Method and system for determining dynamism in a scene by processing a depth image
CN110956219B (en) * 2019-12-09 2023-11-14 爱芯元智半导体(宁波)有限公司 Video data processing method, device and electronic system
CN111260695A (en) * 2020-01-17 2020-06-09 桂林理工大学 Throw-away sundry identification algorithm, system, server and medium
CN111612811B (en) * 2020-06-05 2021-02-19 中国人民解放军军事科学院国防科技创新研究院 Video foreground information extraction method and system
CN113781571A (en) * 2021-02-09 2021-12-10 北京沃东天骏信息技术有限公司 Image processing method and device
CN113409353B (en) * 2021-06-04 2023-08-01 杭州联吉技术有限公司 Motion prospect detection method, motion prospect detection device, terminal equipment and storage medium
CN113456027B (en) * 2021-06-24 2023-12-22 南京润楠医疗电子研究院有限公司 Sleep parameter assessment method based on wireless signals
CN113538270A (en) * 2021-07-09 2021-10-22 厦门亿联网络技术股份有限公司 Portrait background blurring method and device
CN113408669B (en) * 2021-07-30 2023-06-16 浙江大华技术股份有限公司 Image determining method and device, storage medium and electronic device
CN114077877B (en) * 2022-01-19 2022-05-13 人民中科(北京)智能技术有限公司 Newly-added garbage identification method and device, computer equipment and storage medium
CN114581482B (en) * 2022-03-09 2023-05-02 湖南中科助英智能科技研究院有限公司 Method, device and equipment for detecting moving object under moving platform
CN116030367B (en) * 2023-03-27 2023-06-20 山东智航智能装备有限公司 Unmanned aerial vehicle viewing angle moving target detection method and device
CN116188534B (en) * 2023-05-04 2023-08-08 广东工业大学 Indoor real-time human body tracking method, storage medium and equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101022505A (en) * 2007-03-23 2007-08-22 中国科学院光电技术研究所 Method and device for automatically detecting moving target under complex background
CN103123726A (en) * 2012-09-07 2013-05-29 佳都新太科技股份有限公司 Target tracking algorithm based on movement behavior analysis

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"CenSurE: Center Surround Extremas for Realtime Feature Detection and Matching"; Motilal Agrawal et al.; Computer Vision - ECCV 2008; 2008-12-31; vol. 5305; pp. 102-115 *
"Virtual registration method based on global homography transformation"; Guan Tao et al.; Journal of Huazhong University of Science and Technology (Natural Science Edition); April 2007; vol. 35, no. 4; pp. 100-102 *
"Research on vision-based moving target detection and tracking algorithms"; Jiang Wenbin; China Master's Theses Full-text Database, Information Science and Technology; 2012-09-15; no. 9; pp. I138-506 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109389618A (en) * 2017-08-04 2019-02-26 University of Liège Foreground and background detection method
CN109389618B (en) * 2017-08-04 2022-03-01 University of Liège Foreground and background detection method

Also Published As

Publication number Publication date
CN103325112A (en) 2013-09-25

Similar Documents

Publication Publication Date Title
CN103325112B (en) Method for rapidly detecting a moving target in a dynamic scene
Zhou et al. Efficient road detection and tracking for unmanned aerial vehicle
JP6095018B2 (en) Detection and tracking of moving objects
CN109961506A (en) A kind of fusion improves the local scene three-dimensional reconstruction method of Census figure
US20180268237A1 (en) Method and system for determining at least one property related to at least part of a real environment
Delmerico et al. Building facade detection, segmentation, and parameter estimation for mobile robot localization and guidance
WO2019071976A1 (en) Panoramic image saliency detection method based on regional growth and eye movement model
WO2018049704A1 (en) Vehicle detection, tracking and localization based on enhanced anti-perspective transformation
WO2019057197A1 (en) Visual tracking method and apparatus for moving target, electronic device and storage medium
Wang et al. Robust edge-based 3D object tracking with direction-based pose validation
CN106462975A (en) Method and apparatus for object tracking and segmentation via background tracking
Burlacu et al. Obstacle detection in stereo sequences using multiple representations of the disparity map
CN113689365B (en) Target tracking and positioning method based on Azure Kinect
Wang et al. Hand posture recognition from disparity cost map
CN110910497A (en) Method and system for realizing augmented reality map
Liu et al. [Retracted] Mean Shift Fusion Color Histogram Algorithm for Nonrigid Complex Target Tracking in Sports Video
van de Wouw et al. Hierarchical 2.5-d scene alignment for change detection with large viewpoint differences
Zhang et al. An IR and visible image sequence automatic registration method based on optical flow
CN101685538B (en) Method and device for tracking object
Zhou et al. Target tracking based on foreground probability
Walha et al. Moving object detection system in aerial video surveillance
Zhou et al. Speeded-up robust features based moving object detection on shaky video
Yoshimoto et al. Cubistic representation for real-time 3D shape and pose estimation of unknown rigid object
Mohammed et al. An improved CAMShift algorithm for object detection and extraction
Zheng et al. Semantic plane-structure based motion detection with a nonstationary camera

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160323

Termination date: 20180607

CF01 Termination of patent right due to non-payment of annual fee