CN103325112A - Quick detecting method for moving objects in dynamic scene - Google Patents

Quick detecting method for moving objects in dynamic scene

Info

Publication number
CN103325112A
CN103325112A (application CN201310222645.5A)
Authority
CN
China
Prior art keywords
frame
background
pixel
foreground
moving target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013102226455A
Other languages
Chinese (zh)
Other versions
CN103325112B (en)
Inventor
张红颖
胡正
孙毅刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Civil Aviation University of China
Original Assignee
Civil Aviation University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Civil Aviation University of China filed Critical Civil Aviation University of China
Priority to CN201310222645.5A priority Critical patent/CN103325112B/en
Publication of CN103325112A publication Critical patent/CN103325112A/en
Application granted granted Critical
Publication of CN103325112B publication Critical patent/CN103325112B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

Provided is a fast detection method for moving targets in a dynamic scene. The method performs inter-frame registration of a moving image sequence using CenSurE feature points and a homography transformation model, obtains a registered version of the previous frame with the current frame as reference, and subtracts it from the current frame to produce a frame-difference image from which a foreground mask is generated. A dynamic, real-time-updated background is then built from the spatial distribution of the foreground mask in the current frame, and a background-subtraction image is obtained with the background subtraction method. The grey-level probability density of each pixel in the frame-difference image is accumulated; when the cumulative probability density exceeds 2φ(k)−1, that grey level is taken as the adaptive threshold, pixels whose grey values exceed the threshold are judged foreground, and the rest background. The method reaches a processing speed of 15 frames/s and, while maintaining this detection speed, recovers relatively complete moving targets, thereby meeting the requirements of rapidity, noise immunity, illumination adaptability and target completeness for moving-target detection in a dynamic scene.

Description

Fast detection method for moving targets in a dynamic scene
Technical field
The invention belongs to the field of civil aviation technology, and in particular relates to a fast detection method for moving targets in a dynamic scene.
Background art
Moving-target detection extracts moving objects from a video image sequence and is the foundation of higher-level processing in computer vision such as target recognition, tracking and behaviour analysis. According to the motion state of the camera, moving-target detection divides into two classes: detection in static scenes and detection in dynamic scenes. Detection in static scenes is relatively mature and is widely used in fixed video-surveillance installations; a common algorithm is the background subtraction method based on a Gaussian mixture model. In a dynamic scene, because both the camera and the target are moving, detection is considerably harder; it is therefore a current focus and difficulty of moving-target detection research, with broad application prospects in military target strikes, tracking of aerially photographed ground targets, and panoramic surveillance with rotating cameras.
At present, moving-target detection methods under a dynamic background fall into two main classes: optical-flow methods and background-motion-compensation methods.
Optical-flow methods discriminate moving objects from the observation that the velocity difference between target and background produces a clearly different optical flow. Their main advantage is that they are not restricted by the camera's motion state and thus apply to both static and dynamic backgrounds. Their drawback is a huge computational load, which makes real-time requirements hard to satisfy, and a strong sensitivity to illumination, noise and target occlusion, which limits their range of application.
Background-motion-compensation methods register successive frames through background motion parameters and a transformation model, converting the moving-target detection problem in a dynamic scene into one in a static scene. Motion-compensation methods are described, for example, in the following documents:
[1] SUHR J K, JUNG H G, LI G, et al. Background compensation for pan-tilt-zoom cameras using 1-D feature matching and outlier rejection [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2011, 21(3): 371-377.
[2] WANG Mei, TU Dawei, XU Chao, et al. Moving object detection fusing SIFT feature matching and difference multiplication [J]. Optics and Precision Engineering, 2011, 19(4): 892-899.
[3] ZHU Juanjuan, GUO Baolong. Moving object detection based on variable-block difference in complex scenes [J]. Optics and Precision Engineering, 2011, 19(1): 183-191.
After motion compensation, the background of two adjacent frames is nearly static, and the differing pixels detected between the two frames by the frame-difference method are the moving target. The main advantages of the frame-difference method are simple computation, easy implementation, and good adaptability to global illumination changes in the scene; but for a moving target of uniform overall grey level its detection result contains large holes, so the target is incomplete, and ghosting also appears.
Multiplying the differences on top of the frame-difference method can eliminate the ghosting in the detection result, but the multiplication also suppresses foreground information, which aggravates the hole effect.
Variable-block differencing can eliminate holes to some extent, but the block-wise processing gives the object edges a jagged appearance, the discrimination threshold between background and foreground blocks is hard to determine, and the method is rather sensitive to noise.
For the preliminary detection result, existing methods normally extract the binary foreground pixels with a fixed threshold or with the OTSU method, in preparation for subsequent processing such as target tracking, recognition and behaviour analysis. Fixed-threshold binarisation is simple and suits foreground/background segmentation in static scenes, but in a dynamic scene with a moving camera the foreground it extracts is inaccurate, or no effective foreground pixels are segmented at all. The OTSU method determines the segmentation threshold by the maximum between-class variance principle and can adapt to scene changes, but for a preliminary detection image whose grey histogram has no obvious peaks and valleys its binary segmentation is poor, which greatly increases the risk of false detections.
Summary of the invention
To solve the above problems, the object of the present invention is to provide a fast detection method for moving targets in a dynamic scene that offers both real-time detection and completeness of the result.
To achieve this object, the fast detection method provided by the invention comprises the following steps, carried out in order:
1) using CenSurE feature points and a homography transformation model to register the frames of the moving image sequence quickly and accurately, thereby compensating the inter-frame translation, rotation and zoom of the background caused by camera motion, and obtaining a registered version of the previous frame;
2) temporally, subtracting the registered previous frame from the current frame to obtain a frame-difference image and generate a moving-target foreground mask; then building a dynamic, real-time-updated background from the spatial distribution of this foreground mask in the current frame; and finally obtaining, by the background subtraction method, a background-subtraction image containing the moving foreground target;
3) accumulating with a histogram the grey-level probability density of each pixel in the frame-difference image and the background-subtraction image of step 2); the grey level at which the cumulative probability density first exceeds 2φ(k)−1 is the required adaptive segmentation threshold; pixels whose grey value exceeds this threshold are judged foreground, and the others background.
The registration method of step 1) is as follows: first extract the CenSurE feature points of the two adjacent frames and generate feature-point descriptors with U-SURF; then use the Euclidean distance as the feature similarity measure and match the feature-point sets of the two adjacent frames quickly with a feature-classification strategy; next filter out the outliers with the random sample consensus algorithm to obtain accurate background matching point pairs; finally compute an accurate inter-frame homography matrix by least squares and transform the previous frame by this matrix to obtain the registered previous frame.
The method of producing the background-subtraction image containing the foreground moving target in step 2) is as follows: first subtract the registered previous frame from the current frame of the sequence to obtain the frame-difference image; binarise this image with the adaptive threshold, detect the moving-target image blocks with a contour-detection method, and label each region with its minimum bounding rectangle, thereby obtaining in the time domain a foreground mask covering the maximum possible region of the moving target. Then take the first frame of the sequence as the initial background frame; in real time, replace the region of the background frame corresponding to the foreground mask with the corresponding region of the registered previous frame obtained in step 1), and update the other regions of the background frame with the corresponding regions of the current frame, so that a dynamically updated background image is obtained in real time; finally obtain with the background subtraction method the background-subtraction image containing the foreground moving target.
The adaptive-threshold segmentation of step 3) is as follows: for the frame-difference image and the background-subtraction image of step 2), compute for each pixel its difference from the mean of all pixels in the frame; if this difference is below a certain bound the pixel is judged background, otherwise foreground. Then, following the normal distribution law of a random variable, accumulate the distribution probability of each grey level; once this cumulative probability exceeds the given bound the pixel is judged foreground, otherwise background, and the grey level at that point is the adaptive segmentation threshold.
The fast detection method provided by the invention reaches a processing speed of 15 frames/s and, while guaranteeing this detection speed, also yields a relatively complete moving target; it therefore essentially satisfies the requirements of rapidity, noise immunity, illumination adaptability and target completeness for moving-target detection in a dynamic scene.
Description of drawings
Fig. 1 is the flow chart of the fast detection method for moving targets in a dynamic scene provided by the invention.
Fig. 2a-2d show, respectively, two adjacent frames of the Coastguard standard test sequence, the moving-target detection result of the difference-multiplication method on these images, and the result of the method of the invention on the same images.
Fig. 3a-3d show, respectively, two adjacent frames of the Stefan standard test sequence, the moving-target detection result of the difference-multiplication method on these images, and the result of the method of the invention on the same images.
Fig. 4a-4d show, respectively, two adjacent frames of the Indoor Robot standard test sequence, the moving-target detection result of the difference-multiplication method on these images, and the result of the method of the invention on the same images.
Fig. 5 shows the binary foreground segmentation of the detection results in Fig. 2b, Fig. 3b and Fig. 4b obtained with the OTSU method.
Fig. 6 shows the binary foreground segmentation of the detection results in Fig. 2b, Fig. 3b and Fig. 4b obtained with the method of the invention.
Embodiments
The fast detection method for moving targets in a dynamic scene provided by the invention is described in detail below with reference to the drawings and specific embodiments.
As shown in Fig. 1, the method comprises the following steps, carried out in order:
1) First, exploiting the speed and accuracy of CenSurE feature-point extraction, use these feature points and a homography transformation model to register the frames of the moving image sequence quickly and accurately, thereby compensating the inter-frame translation, rotation and zoom of the background caused by camera motion and obtaining a registered version of the previous frame.
The registration method is as follows: first extract the CenSurE feature points of the two adjacent frames and generate feature-point descriptors with U-SURF; then use the Euclidean distance as the feature similarity measure and match the two frames' feature-point sets quickly with a feature-classification strategy; next filter out the outliers with the random sample consensus algorithm (RANSAC) to obtain accurate background matching point pairs; finally compute an accurate inter-frame homography matrix by least squares and resample the previous frame according to this matrix to obtain its registration to the current frame.
The key to registration is computing the inter-frame transformation of the image sequence and then using this transformation to compensate the background motion caused by camera motion. A planar homography is defined as a projective mapping from one plane to another; the homography matrix associates the positions of a point set in the source image plane with those in the target image plane.
In practical engineering the displacement between two adjacent frames is usually only a few pixels, and a dynamic scene changes slowly without abrupt jumps. The feature-point extraction algorithm needed to solve for the homography matrix must therefore run in real time, be invariant to small-scale translations, rotations and zooms, and be tolerant to a certain degree of illumination, noise and viewpoint change. CenSurE satisfies these requirements well.
CenSurE is a local invariant feature of high computational efficiency. Its main idea is first to build a scale space with a two-level approximation of the Laplacian-of-Gaussian filter, using integral images to accelerate computation of the Haar-wavelet centre-surround response at each pixel; then to detect local extrema with non-maximum suppression; and finally to filter out the unstable points with small responses that lie on edges or lines.
Since the angular deviation between two adjacent frames is small, the U-SURF feature described in the SURF algorithm of Bay et al. satisfies the robustness requirement for feature points under small rotations, and it is fast to compute. The U-SURF descriptor is generated as follows: build a 20s × 20s window centred on the CenSurE feature point (s is the scale of the point) and divide it into 16 sub-regions; in each 5s × 5s sub-region compute, with sampling step s, the Haar wavelet responses d_x and d_y in the x and y directions and weight them with different coefficients; describe each sub-region with the four-dimensional vector V = (Σd_x, Σd_y, Σ|d_x|, Σ|d_y|). Doing the same for all 16 sub-regions yields a 64-dimensional feature description vector, which is finally normalised to remove the influence of illumination.
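The sub-region layout of the descriptor can be sketched as follows. This is an illustrative numpy sketch of the layout only, not the exact SURF implementation: simple image gradients stand in for the Haar wavelet responses d_x and d_y, and the per-sub-region weighting is omitted.

```python
import numpy as np

def usurf_like_descriptor(patch):
    """Illustrative U-SURF-style layout: split a square patch into 4x4
    sub-regions, use gradients as stand-ins for the Haar responses d_x, d_y,
    describe each sub-region by (sum dx, sum dy, sum |dx|, sum |dy|), and
    L2-normalise the resulting 64-dimensional vector."""
    n = patch.shape[0]
    assert n % 4 == 0 and patch.shape[1] == n
    step = n // 4
    dy, dx = np.gradient(patch.astype(float))
    vec = []
    for i in range(4):
        for j in range(4):
            sx = dx[i * step:(i + 1) * step, j * step:(j + 1) * step]
            sy = dy[i * step:(i + 1) * step, j * step:(j + 1) * step]
            vec += [sx.sum(), sy.sum(), np.abs(sx).sum(), np.abs(sy).sum()]
    v = np.array(vec)
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```

The normalisation in the last step is what gives the descriptor its tolerance to global illumination changes.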
Take the Euclidean distance as the similarity measure between feature vectors. For any feature point of the current frame, find in the feature-point set of the previous (to-be-registered) frame the two feature points with the nearest and second-nearest Euclidean distance. If the ratio of these distances satisfies

    d_nearest / d_second < T    (1)

the two nearest feature points are considered matched. Since CenSurE feature points come in two classes, maxima and minima, the invention classifies them accordingly, which raises both the matching speed and the matching accuracy. At this point the matched pairs may still contain a few mismatches or pairs originating from the moving foreground target; these are filtered out with the random sample consensus algorithm (RANSAC).
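The nearest/second-nearest ratio test of Eq. (1) can be sketched as a minimal numpy routine. The function name and the brute-force search are illustrative assumptions; a real implementation would use a k-d tree or other approximate nearest-neighbour index.

```python
import numpy as np

def match_ratio_test(desc_cur, desc_prev, ratio=0.7):
    """For each descriptor in the current frame, find the nearest and
    second-nearest descriptors in the previous frame by Euclidean distance
    and accept the match only if d_nearest / d_second < ratio (Eq. 1).
    Returns a list of (index_current, index_previous) pairs."""
    matches = []
    for i, d in enumerate(desc_cur):
        dists = np.linalg.norm(desc_prev - d, axis=1)
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if second > 0 and nearest / second < ratio:
            matches.append((i, int(order[0])))
    return matches
```

An ambiguous point, whose two best candidates are nearly equidistant, fails the ratio test and is discarded rather than matched incorrectly.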
The planar homography is defined as a projective mapping from one plane to another; the homography matrix associates the positions of the feature-point set of the previous (to-be-registered) frame with those of the current frame:

        | h11 h12 h13 |   | h1^T |
    H = | h21 h22 h23 | = | h2^T |    (2)
        | h31 h32 h33 |   | h3^T |

where h_i^T = (h_i1, h_i2, h_i3). The planar homography matrix has only 8 degrees of freedom, so h33 = 1 is imposed.
Let p = (x, y, 1)^T and q = (u, v, 1)^T be the homogeneous coordinates of a matched point pair; the homography matrix transforms p into q:

    q = Hp    (3)

where H encodes the translation, rotation, zoom and other changes between the two adjacent frames. Expanding this formula gives

    u = h1^T p / h3^T p,  v = h2^T p / h3^T p,

that is, h1^T p − u(h3^T p) = 0 and h2^T p − v(h3^T p) = 0. Substituting h_i^T = (h_i1, h_i2, h_i3) and rearranging yields the linear system in the eight unknowns h_ij:

    h11·x + h12·y + h13 − h31·x·u − h32·y·u = u
    h21·x + h22·y + h23 − h31·x·v − h32·y·v = v    (4)
Equation (4) shows that in theory the 8-degree-of-freedom planar homography matrix requires only 4 matched point pairs. To obtain more accurate and robust transformation parameters, more matched pairs are extracted in the background region and the optimal transformation matrix is solved by least squares, written in matrix form as

    AX = B    (5)

where each matched pair contributes two rows to A:

    A_{2n×8} = | x1 y1 1  0  0  0  −x1·u1  −y1·u1 |
               | 0  0  0  x1 y1 1  −x1·v1  −y1·v1 |
               |                ⋮                  |

X_{8×1} = (h11 h12 h13 h21 h22 h23 h31 h32)^T, B_{2n×1} = (u1 v1 … un vn)^T, and (x_i, y_i), (u_i, v_i) are the coordinates of the matched background feature points in the previous and current frame respectively, with n ≥ 4.
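The least-squares solution of Eqs. (4)-(5) can be sketched directly with numpy. This is a minimal sketch of the formulation above (two stacked rows per correspondence, h33 fixed to 1), not the patent's full pipeline, which first runs RANSAC to keep only background pairs.

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Least-squares homography with h33 = 1: stack the two rows of Eq. (4)
    for each correspondence and solve the overdetermined system AX = B
    (Eq. 5) for the eight unknowns h11..h32.  Needs n >= 4 pairs."""
    A, b = [], []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0, -x * u, -y * u]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -x * v, -y * v]); b.append(v)
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)
```

With exactly 4 non-degenerate pairs the system is determined; with more pairs the least-squares fit averages out small localisation errors in the feature points.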
Using the inter-frame homography matrix, the previous frame is mapped onto the registered frame; grey values at non-integer pixel positions are obtained by bilinear interpolation, compensating the background rotation, zoom and translation caused by camera motion. A six-parameter affine model can also describe the linear transformations of a plane image (translation, rotation, zoom) and is used for global motion estimation of the background under a moving camera, but it can only map a plane image in parallel, which requires the scene to be far enough from the camera that it can be regarded as a single plane. In fact, an affine transformation is simply the special case of the planar homography with h31 = h32 = 0; the registration model of the invention can describe plane-to-plane mappings in 3D space and is therefore more general than the affine model.
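The warping with bilinear interpolation described above can be sketched as follows. This is a minimal, unoptimised sketch for a single-channel image (an inverse-mapping loop; production code would use a vectorised or library warp such as OpenCV's warpPerspective).

```python
import numpy as np

def warp_bilinear(img, H):
    """Warp img through homography H using inverse mapping: for each output
    pixel, invert H to find the source coordinate and sample the four
    surrounding pixels with bilinear interpolation (non-integer positions,
    as in the patent's registration step).  Out-of-range pixels stay 0."""
    h, w = img.shape
    Hinv = np.linalg.inv(H)
    out = np.zeros((h, w), dtype=float)
    for yi in range(h):
        for xi in range(w):
            sx, sy, sw = Hinv @ np.array([xi, yi, 1.0])
            sx, sy = sx / sw, sy / sw
            x0, y0 = int(np.floor(sx)), int(np.floor(sy))
            if 0 <= x0 < w - 1 and 0 <= y0 < h - 1:
                dx, dy = sx - x0, sy - y0
                out[yi, xi] = ((1 - dx) * (1 - dy) * img[y0, x0]
                               + dx * (1 - dy) * img[y0, x0 + 1]
                               + (1 - dx) * dy * img[y0 + 1, x0]
                               + dx * dy * img[y0 + 1, x0 + 1])
    return out
```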
2) After the inter-frame registration of step 1), the background of two adjacent frames of the sequence is nearly static, and the translation, rotation and zoom of the background caused by camera motion have been eliminated, so the main difference between the frames comes from the motion of the foreground target.
The invention extracts a comparatively complete moving target by combining temporal and spatial information. The general idea is as follows: first, temporally, subtract the registered previous frame from the current frame to obtain a frame-difference image and generate a moving-target foreground mask; then build a dynamic, real-time-updated background from the spatial distribution of this mask in the current frame; finally obtain, with the background subtraction method, a background-subtraction image containing the moving foreground target.
Let f(x, y, t) be frame t of the moving image sequence and f′(x, y, t−1) the result of registering frame t−1 with frame t as reference. The frame-difference image is obtained with the frame-difference method as

    dif(x, y, t) = |f(x, y, t) − f′(x, y, t−1)|    (6)

and is binarised as

    difB(x, y, t) = 1, dif(x, y, t) ≥ Th1
                    0, dif(x, y, t) < Th1    (7)

where Th1 is the adaptive foreground/background segmentation threshold of the frame-difference image; its determination is described in step 3).
Detect the moving-target blocks in the binarised frame-difference image with a contour-detection method, remove the noise blocks of small area, mark each moving-target block with its minimum bounding rectangle, set the pixel values inside these regions to 1 and all other pixels to 0. This yields the temporal moving-target foreground mask

    M(x, y, t) = 1, (x, y) inside a marked target rectangle
                 0, otherwise    (8)

which covers the maximum possible region of the moving target.
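The frame-difference and mask-generation steps of Eqs. (6)-(8) can be sketched in numpy. This is a hedged simplification: the patent detects multiple contours and removes small noise blocks, while the sketch marks a single bounding rectangle around all changed pixels.

```python
import numpy as np

def frame_diff_mask(cur, reg_prev, th):
    """Absolute frame difference against the registered previous frame
    (Eq. 6), binarised at threshold th (Eq. 7), then a foreground mask
    set to 1 inside the bounding rectangle of the detected pixels (a
    one-rectangle stand-in for the patent's contour + minimum-bounding-
    rectangle labelling of Eq. 8)."""
    dif = np.abs(cur.astype(int) - reg_prev.astype(int))
    difB = (dif >= th).astype(np.uint8)
    mask = np.zeros_like(difB)
    ys, xs = np.nonzero(difB)
    if ys.size:
        mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1] = 1
    return dif, mask
```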
The background subtraction method extracts a more complete moving foreground target than the frame-difference method; here a dynamically updated background B(x, y, t) is created:

(1) First take the first frame of the moving image sequence as the initial background: B(x, y, t) = f(x, y, t), t = 1.

(2) Update the background image in real time according to the spatial distribution of the foreground mask. The update principle: the moving-foreground mask region is replaced by the corresponding background region of the frame registered in step 1), while the other regions are updated with the current frame of the sequence:

    B(x, y, t) = B′(x, y, t−1),                          M(x, y, t) = 1
                 (1 − α)·B′(x, y, t−1) + α·f(x, y, t),   M(x, y, t) = 0    (9)

where B′(x, y, t−1) is the background image of time t−1 registered to the current frame via the transformation of step 1):

    B′(x, y, t−1) = T(B(x, y, t−1))    (10)

T(·) denotes the homography transformation between the previous frame and the current frame described in step 1), and α is the background update rate factor, 0 ≤ α ≤ 1.
Finally, obtain with the background subtraction method the background-subtraction image containing the moving foreground target:

    Dif(x, y, t) = f(x, y, t) − B(x, y, t)    (11)

and binarise it to extract the moving foreground target:

    F(x, y, t) = 255, Dif(x, y, t) ≥ Th2
                 0,   Dif(x, y, t) < Th2    (12)

where Th2 is the adaptive foreground/background segmentation threshold of the background-subtraction image; its determination is described in step 3).
3) So that the foreground/background segmentation threshold of step 2) can adapt to scene changes, the invention finally proposes a probability-statistics method to compute the adaptive segmentation threshold and thereby achieve fast and accurate segmentation of the foreground target.
The segmentation threshold must be neither too small, which would admit excessive noise, nor too large, which would miss many foreground points of the moving target. The Otsu algorithm determines the threshold by the maximum between-class variance principle, but for frame-difference and background-subtraction images whose grey histograms have no obvious peaks and valleys its binarisation is poor.
Based on the background-point criterion |X − μ| ≤ 2.5σ of the Gaussian-mixture background modelling algorithm, and fully exploiting the approximately normal distribution of grey levels in the frame-difference and background-subtraction images, the invention proposes a fast adaptive threshold computation by a probability method. Specifically, the grey-level probability density of each pixel of the frame-difference and background-subtraction images of step 2) is accumulated with a histogram; when the cumulative probability density of a grey level exceeds 2φ(k)−1, that grey level is taken as the adaptive segmentation threshold; pixels whose grey values exceed the threshold are judged foreground, the rest background.
For the frame-difference and background-subtraction images of step 2), compare each pixel with the mean of all pixels in the frame; if the difference is below a certain bound the pixel is judged background, otherwise foreground:

    |d(x, y, t) − u_t| < kδ_t → background; otherwise foreground    (13)

where u_t and δ_t are the mean and standard deviation of the frame at time t. By the normal distribution law of a random variable,

    P{|d(x, y, t) − u_t| < kδ_t}
      = P{−kδ_t < d(x, y, t) − u_t < kδ_t}
      = P{u_t − kδ_t < d(x, y, t) < u_t + kδ_t}
      = φ(k) − φ(−k)
      = 2φ(k) − 1    (14)

where φ(·) is the standard normal cumulative distribution function. Equation (14) shows that, for each pixel of the frame-difference and background-subtraction images of step 2), the pixel is foreground when the cumulative probability of its grey level exceeds 2φ(k)−1, and background otherwise. This adaptive segmentation method for foreground and background needs no explicit computation of the per-frame pixel mean and variance, which greatly simplifies the computation; it is simple and efficient.
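The threshold selection of step 3) can be sketched as a cumulative-histogram lookup. A minimal sketch: the grey-level histogram's cumulative probability is compared against 2φ(k)−1, and the first grey level that exceeds it becomes the threshold; k = 2.5 follows the |X − μ| ≤ 2.5σ rule cited above, and φ is computed via the error function.

```python
import numpy as np
from math import erf, sqrt

def adaptive_threshold(diff_img, k=2.5):
    """Accumulate the grey-level probability density of a difference image;
    return the first grey level whose cumulative probability exceeds
    2*phi(k) - 1 (Eq. 14), where phi is the standard normal CDF.
    No per-frame mean or variance is computed explicitly."""
    target = 2 * (0.5 * (1 + erf(k / sqrt(2)))) - 1   # 2*phi(k) - 1
    hist = np.bincount(diff_img.ravel().astype(np.int64), minlength=256)
    cdf = np.cumsum(hist) / hist.sum()
    return int(np.searchsorted(cdf, target, side='right'))
```

With k = 2.5 the target cumulative probability is about 0.9876, i.e. roughly the brightest 1.2% of difference-image pixels are kept as foreground candidates.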
To verify the effect of the invention, the inventors ran tests on a PC with a Pentium(R) Dual-Core 2.70 GHz CPU and 2 GB RAM, using the Visual Studio 2010 integrated development environment and OpenCV 2.3.1, on the following standard test sequences taken under a moving camera: 1) the 352 × 288 Coastguard sequence; 2) the 352 × 288 Stefan sequence; 3) the 320 × 240 Indoor Robot (Robots) hand-held sequence (http://www.ces.clemson.edu/~stb/images/). The results are shown in Figs. 2-6.
The invention makes full use of the robustness of the CenSurE feature to small-scale zoom, rotation and translation and of the accuracy of its feature-point positions, guaranteeing the accuracy of the computed inter-frame transformation parameters; the homography transformation model is better suited than the affine transformation model to registering the inter-frame background under general camera motion.
Thanks to the high computational efficiency of the CenSurE operator, the speed of the U-SURF descriptor, and the efficiency of the probability-method foreground segmentation adopted by the invention, the method runs fast: the experiments on the test sequences reach 15 frames/s, nearly 10 times the processing speed of the difference-multiplication algorithm based on SIFT feature matching, as shown in Table 1 below.
Table 1 Runtime comparison of the different methods (the table appears only as an image in the original document; its per-sequence figures are not reproduced here)
Comparing Fig. 2c with Fig. 2d, Fig. 3c with Fig. 3d, and Fig. 4c with Fig. 4d shows that, while guaranteeing detection speed, the spatio-temporal correlation algorithm of the invention detects more moving foreground pixels than the difference-multiplication algorithm; the detected targets are more complete, greatly reducing the risk of missed detections.
Finally, comparing the two foreground segmentation methods of Fig. 5 and Fig. 6 shows that the foreground/background segmentation of the OTSU method introduces excessive noise, whereas the probability-statistics method of the invention segments the foreground pixels more accurately.
Overall, the invention simultaneously meets the real-time and completeness requirements of moving-target detection in a dynamic scene: it reduces the algorithm's running time while improving the completeness of the detection result, and it is especially suited to detecting moving targets in complex scenes when the camera moves slowly.

Claims (4)

1. A fast detection method for moving targets in a dynamic scene, characterized in that the method comprises the following steps carried out in order:
1) registering the frames of the moving image sequence quickly and accurately using CenSurE feature points and a homography transformation model, thereby compensating the translation, rotation and zoom of the inter-frame background caused by camera motion, to obtain a registered frame of the previous frame;
2) in the temporal domain, subtracting the registered frame of the previous frame from the current frame to obtain a frame-difference image used to generate a moving-target foreground mask; then constructing a dynamic background updated in real time according to the spatial distribution of this foreground mask in the current frame; and finally obtaining, by the background subtraction method, a background-subtraction image containing the moving foreground target;
3) computing by histogram statistics the probability density of each pixel gray level in the frame-difference image and the background-subtraction image of step 2); when the cumulative probability density of a certain gray level exceeds 2φ(k)-1, that gray level is the required adaptive segmentation threshold; pixels whose gray value exceeds this threshold are judged foreground pixels, otherwise background pixels.
2. The fast detection method for moving targets in a dynamic scene according to claim 1, characterized in that the registration method of step 1) is as follows: first extracting the CenSurE feature points of two adjacent frames and generating feature point descriptors with U-SURF; then, using the Euclidean distance as the feature similarity measure, rapidly matching the feature point sets of the two adjacent frames with a feature classification strategy; then filtering out outliers with the random sample consensus (RANSAC) algorithm to obtain accurate background matching point pairs; and finally computing an accurate inter-frame homography matrix by the least squares method and transforming the previous frame according to this homography matrix to obtain the registered frame of the previous frame.
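The RANSAC outlier filtering of claim 2 can be sketched as follows. For brevity this illustrative NumPy version fits a pure-translation model from a single random correspondence; the patent fits a full homography from minimal samples, but the consensus-based outlier-rejection logic is the same. All names and parameter values (`iters`, `tol`) are assumptions.

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """Toy RANSAC: repeatedly hypothesize a translation from one random
    correspondence and keep the hypothesis supported by the most inliers,
    then refit on the consensus set (least squares for a translation
    reduces to the mean residual)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, np.zeros(len(src), bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                        # candidate model
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < tol                        # consensus set
        if inliers.sum() > best_inliers.sum():
            best_t, best_inliers = t, inliers
    best_t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return best_t, best_inliers
```

In the full method, the surviving inlier pairs would then feed the least-squares homography estimation of step 1).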
3. The fast detection method for moving targets in a dynamic scene according to claim 1, characterized in that the method of producing the background-subtraction image containing the moving foreground target in step 2) is as follows: first subtracting the registered frame of the previous frame from the current frame of the moving image sequence to obtain a frame-difference image; then binarizing this frame-difference image with an adaptive threshold, detecting the moving-target image blocks with a contour detection method and labeling each region with its minimum enclosing rectangle, thereby obtaining in the temporal domain a foreground mask covering the maximum possible region of the moving target; then taking the first frame of the sequence as the initial background frame, and in real time replacing the region of the background frame corresponding to the foreground mask with the corresponding region of the registered frame obtained from the previous frame in step 1), while updating the other regions of the background frame with the corresponding regions of the current frame, thereby obtaining a dynamically updated real-time background image; and finally obtaining, by the background subtraction method, the background-subtraction image containing the moving foreground target.
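The dynamic background update of claim 3 can be sketched as follows. This is an illustrative NumPy sketch under the assumption that the foreground mask and both frames are already aligned on the same pixel grid; the contour detection and rectangle labeling steps are omitted.

```python
import numpy as np

def update_background(cur, reg_prev, fg_mask):
    """Dynamic background update: inside the foreground mask, keep the
    registered previous frame's pixels so the moving target does not
    pollute the background; elsewhere, refresh with the current frame."""
    return np.where(fg_mask, reg_prev, cur)

def background_subtraction(cur, background, thresh):
    """Threshold the absolute background difference into a foreground mask."""
    return np.abs(cur.astype(int) - background.astype(int)) > thresh
```

Applied once per frame, this keeps the background model current in static regions while freezing it under the detected target, which is what makes the subsequent background subtraction yield a complete foreground.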
4. The fast detection method for moving targets in a dynamic scene according to claim 1, characterized in that the adaptive segmentation threshold of step 3) is determined as follows: for the frame-difference image and the background-subtraction image of step 2), computing the difference between each pixel and the mean of all pixels of the frame; if this difference is less than a certain threshold the pixel is judged a background pixel, otherwise a foreground pixel; then, according to the normal distribution law of random variables, accumulating the distribution probability of each gray level; if this cumulative probability exceeds a certain threshold the pixel is judged a foreground pixel, otherwise a background pixel, and the gray level corresponding to this pixel is the adaptive segmentation threshold.
CN201310222645.5A 2013-06-07 2013-06-07 Quick detecting method for moving objects in dynamic scene Expired - Fee Related CN103325112B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310222645.5A CN103325112B (en) 2013-06-07 2013-06-07 Quick detecting method for moving objects in dynamic scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310222645.5A CN103325112B (en) 2013-06-07 2013-06-07 Quick detecting method for moving objects in dynamic scene

Publications (2)

Publication Number Publication Date
CN103325112A true CN103325112A (en) 2013-09-25
CN103325112B CN103325112B (en) 2016-03-23

Family

ID=49193835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310222645.5A Expired - Fee Related CN103325112B (en) Quick detecting method for moving objects in dynamic scene

Country Status (1)

Country Link
CN (1) CN103325112B (en)

Cited By (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103729857A (en) * 2013-12-09 2014-04-16 南京理工大学 Moving target detection method under movable camera based on secondary compensation
CN103942813A (en) * 2014-03-21 2014-07-23 杭州电子科技大学 Single-moving-object real-time detection method in electric wheelchair movement process
CN104036245A (en) * 2014-06-10 2014-09-10 电子科技大学 Biometric feature recognition method based on on-line feature point matching
CN104567815A (en) * 2014-12-26 2015-04-29 北京航天控制仪器研究所 Image-matching-based automatic reconnaissance system of unmanned aerial vehicle mounted photoelectric stabilization platform
CN104574358A (en) * 2013-10-21 2015-04-29 诺基亚公司 Method and apparatus for scene segmentation from focal stack images
CN105469421A (en) * 2014-09-04 2016-04-06 南京理工大学 Method based on panoramic system for achieving monitoring of ground moving target
CN105551033A (en) * 2015-12-09 2016-05-04 广州视源电子科技股份有限公司 Element marking method, system and device
CN105741315A (en) * 2015-12-30 2016-07-06 电子科技大学 Downsampling strategy-based statistical background deduction method
CN106127801A (en) * 2016-06-16 2016-11-16 乐视控股(北京)有限公司 A kind of method and apparatus of moving region detection
CN106227216A (en) * 2016-08-31 2016-12-14 朱明 Home-services robot towards house old man
CN106331625A (en) * 2016-08-30 2017-01-11 天津天地伟业数码科技有限公司 Indoor single human body target PTZ tracking method
CN106447694A (en) * 2016-07-28 2017-02-22 上海体育科学研究所 Video badminton motion detection and tracking method
CN106651902A (en) * 2015-11-02 2017-05-10 李嘉禾 Building intelligent early warning method and system
CN106651903A (en) * 2016-12-30 2017-05-10 明见(厦门)技术有限公司 Moving object detection method
CN106780541A (en) * 2016-12-28 2017-05-31 南京师范大学 A kind of improved background subtraction method
CN106875415A (en) * 2016-12-29 2017-06-20 北京理工雷科电子信息技术有限公司 The continuous-stable tracking of small and weak moving-target in a kind of dynamic background
CN107292910A (en) * 2016-04-12 2017-10-24 南京理工大学 Moving target detecting method under a kind of mobile camera based on pixel modeling
CN107316313A (en) * 2016-04-15 2017-11-03 株式会社理光 Scene Segmentation and equipment
CN107563961A (en) * 2017-09-01 2018-01-09 首都师范大学 A kind of system and method for the moving-target detection based on camera sensor
CN107911663A (en) * 2017-11-27 2018-04-13 江苏理工学院 A kind of elevator passenger hazardous act intelligent recognition early warning system based on Computer Vision Detection
CN108109163A (en) * 2017-12-18 2018-06-01 中国科学院长春光学精密机械与物理研究所 A kind of moving target detecting method for video of taking photo by plane
CN108154520A (en) * 2017-12-25 2018-06-12 北京航空航天大学 A kind of moving target detecting method based on light stream and frame matching
CN108196285A (en) * 2017-11-30 2018-06-22 中山大学 A kind of Precise Position System based on Multi-sensor Fusion
CN108305267A (en) * 2018-02-14 2018-07-20 北京市商汤科技开发有限公司 Method for segmenting objects, device, equipment, storage medium and program
CN108830834A (en) * 2018-05-23 2018-11-16 重庆交通大学 A kind of cable-climbing robot video artefacts information automation extraction method
CN108846844A (en) * 2018-04-13 2018-11-20 上海大学 A kind of sea-surface target detection method based on sea horizon
CN109035257A (en) * 2018-07-02 2018-12-18 百度在线网络技术(北京)有限公司 portrait dividing method, device and equipment
CN109085658A (en) * 2018-07-09 2018-12-25 宁波大学 A kind of indoor human body sensing device
CN109389618A (en) * 2017-08-04 2019-02-26 列日大学 Foreground and background detection method
CN109632590A (en) * 2019-01-08 2019-04-16 上海大学 A kind of luminous planktonic organism detection method in deep-sea
WO2019080685A1 (en) * 2017-10-24 2019-05-02 北京京东尚科信息技术有限公司 Video image segmentation method and apparatus, storage medium and electronic device
CN109782811A (en) * 2019-02-02 2019-05-21 绥化学院 A kind of automatic tracing control system and method for unmanned model car
CN110033455A (en) * 2018-01-11 2019-07-19 上海交通大学 A method of extracting information on target object from video
CN110163831A (en) * 2019-04-19 2019-08-23 深圳市思为软件技术有限公司 The object Dynamic Display method, apparatus and terminal device of three-dimensional sand table
CN110349189A (en) * 2019-05-31 2019-10-18 广州铁路职业技术学院(广州铁路机械学校) A kind of background image update method based on continuous inter-frame difference
CN110956219A (en) * 2019-12-09 2020-04-03 北京迈格威科技有限公司 Video data processing method and device and electronic system
CN111260695A (en) * 2020-01-17 2020-06-09 桂林理工大学 Throw-away sundry identification algorithm, system, server and medium
CN111612811A (en) * 2020-06-05 2020-09-01 中国人民解放军军事科学院国防科技创新研究院 Video foreground information extraction method and system
CN113409353A (en) * 2021-06-04 2021-09-17 杭州联吉技术有限公司 Motion foreground detection method and device, terminal equipment and storage medium
CN113408669A (en) * 2021-07-30 2021-09-17 浙江大华技术股份有限公司 Image determination method and device, storage medium and electronic device
CN113456027A (en) * 2021-06-24 2021-10-01 南京润楠医疗电子研究院有限公司 Sleep parameter evaluation method based on wireless signals
CN114077877A (en) * 2022-01-19 2022-02-22 人民中科(济南)智能技术有限公司 Newly added garbage identification method and device, computer equipment and storage medium
CN114581482A (en) * 2022-03-09 2022-06-03 湖南中科助英智能科技研究院有限公司 Moving target detection method and device under moving platform and detection equipment
US11443446B2 (en) 2019-09-30 2022-09-13 Tata Consultancy Services Limited Method and system for determining dynamism in a scene by processing depth image
CN116030367A (en) * 2023-03-27 2023-04-28 山东智航智能装备有限公司 Unmanned aerial vehicle viewing angle moving target detection method and device
CN116188534A (en) * 2023-05-04 2023-05-30 广东工业大学 Indoor real-time human body tracking method, storage medium and equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101022505A (en) * 2007-03-23 2007-08-22 中国科学院光电技术研究所 Mobile target in complex background automatic testing method and device
CN103123726A (en) * 2012-09-07 2013-05-29 佳都新太科技股份有限公司 Target tracking algorithm based on movement behavior analysis

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101022505A (en) * 2007-03-23 2007-08-22 中国科学院光电技术研究所 Mobile target in complex background automatic testing method and device
CN103123726A (en) * 2012-09-07 2013-05-29 佳都新太科技股份有限公司 Target tracking algorithm based on movement behavior analysis

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MOTILAL AGRAWAL ET AL: "CenSurE: Center Surround Extremas for Realtime Feature Detection and Matching", Computer Vision - ECCV 2008, vol. 5305, 31 December 2008 (2008-12-31), pages 102-115 *
JIANG WENBIN: "Research on Vision-Based Moving Object Detection and Tracking Algorithms", China Master's Theses Full-text Database, Information Science and Technology, no. 9, 15 September 2012 (2012-09-15), pages 138-506 *
GUAN TAO ET AL: "Virtual Registration Method Based on Global Homography Transformation", Journal of Huazhong University of Science and Technology (Natural Science Edition), vol. 35, no. 4, 30 April 2007 (2007-04-30), pages 100-102 *

Cited By (72)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104574358A (en) * 2013-10-21 2015-04-29 诺基亚公司 Method and apparatus for scene segmentation from focal stack images
CN104574358B (en) * 2013-10-21 2018-10-12 诺基亚技术有限公司 From the method and apparatus for focusing heap image progress scene cut
CN103729857A (en) * 2013-12-09 2014-04-16 南京理工大学 Moving target detection method under movable camera based on secondary compensation
CN103729857B (en) * 2013-12-09 2016-12-07 南京理工大学 Moving target detecting method under mobile camera based on second compensation
CN103942813A (en) * 2014-03-21 2014-07-23 杭州电子科技大学 Single-moving-object real-time detection method in electric wheelchair movement process
CN104036245B (en) * 2014-06-10 2018-04-06 电子科技大学 A kind of biological feather recognition method based on online Feature Points Matching
CN104036245A (en) * 2014-06-10 2014-09-10 电子科技大学 Biometric feature recognition method based on on-line feature point matching
CN105469421A (en) * 2014-09-04 2016-04-06 南京理工大学 Method based on panoramic system for achieving monitoring of ground moving target
CN104567815A (en) * 2014-12-26 2015-04-29 北京航天控制仪器研究所 Image-matching-based automatic reconnaissance system of unmanned aerial vehicle mounted photoelectric stabilization platform
CN106651902A (en) * 2015-11-02 2017-05-10 李嘉禾 Building intelligent early warning method and system
CN105551033A (en) * 2015-12-09 2016-05-04 广州视源电子科技股份有限公司 Element marking method, system and device
CN105551033B (en) * 2015-12-09 2019-11-26 广州视源电子科技股份有限公司 Component labelling mthods, systems and devices
CN105741315B (en) * 2015-12-30 2019-04-02 电子科技大学 A kind of statistics background subtraction method based on down-sampled strategy
CN105741315A (en) * 2015-12-30 2016-07-06 电子科技大学 Downsampling strategy-based statistical background deduction method
CN107292910B (en) * 2016-04-12 2020-08-11 南京理工大学 Moving target detection method under mobile camera based on pixel modeling
CN107292910A (en) * 2016-04-12 2017-10-24 南京理工大学 Moving target detecting method under a kind of mobile camera based on pixel modeling
CN107316313A (en) * 2016-04-15 2017-11-03 株式会社理光 Scene Segmentation and equipment
CN107316313B (en) * 2016-04-15 2020-12-11 株式会社理光 Scene segmentation method and device
CN106127801A (en) * 2016-06-16 2016-11-16 乐视控股(北京)有限公司 A kind of method and apparatus of moving region detection
CN106447694A (en) * 2016-07-28 2017-02-22 上海体育科学研究所 Video badminton motion detection and tracking method
CN106331625A (en) * 2016-08-30 2017-01-11 天津天地伟业数码科技有限公司 Indoor single human body target PTZ tracking method
CN106227216A (en) * 2016-08-31 2016-12-14 朱明� Home-services robot towards house old man
CN106780541A (en) * 2016-12-28 2017-05-31 南京师范大学 A kind of improved background subtraction method
CN106780541B (en) * 2016-12-28 2019-06-14 南京师范大学 A kind of improved background subtraction method
CN106875415A (en) * 2016-12-29 2017-06-20 北京理工雷科电子信息技术有限公司 The continuous-stable tracking of small and weak moving-target in a kind of dynamic background
CN106875415B (en) * 2016-12-29 2020-06-02 北京理工雷科电子信息技术有限公司 Continuous and stable tracking method for small and weak moving targets in dynamic background
CN106651903A (en) * 2016-12-30 2017-05-10 明见(厦门)技术有限公司 Moving object detection method
CN109389618B (en) * 2017-08-04 2022-03-01 列日大学 Foreground and background detection method
CN109389618A (en) * 2017-08-04 2019-02-26 列日大学 Foreground and background detection method
CN107563961A (en) * 2017-09-01 2018-01-09 首都师范大学 A kind of system and method for the moving-target detection based on camera sensor
WO2019080685A1 (en) * 2017-10-24 2019-05-02 北京京东尚科信息技术有限公司 Video image segmentation method and apparatus, storage medium and electronic device
US11227393B2 (en) 2017-10-24 2022-01-18 Beijing Jingdong Shangke Information Technology Co., Ltd. Video image segmentation method and apparatus, storage medium and electronic device
CN107911663A (en) * 2017-11-27 2018-04-13 江苏理工学院 A kind of elevator passenger hazardous act intelligent recognition early warning system based on Computer Vision Detection
CN108196285A (en) * 2017-11-30 2018-06-22 中山大学 A kind of Precise Position System based on Multi-sensor Fusion
CN108196285B (en) * 2017-11-30 2021-12-17 中山大学 Accurate positioning system based on multi-sensor fusion
CN108109163A (en) * 2017-12-18 2018-06-01 中国科学院长春光学精密机械与物理研究所 A kind of moving target detecting method for video of taking photo by plane
CN108154520A (en) * 2017-12-25 2018-06-12 北京航空航天大学 A kind of moving target detecting method based on light stream and frame matching
CN108154520B (en) * 2017-12-25 2019-01-08 北京航空航天大学 A kind of moving target detecting method based on light stream and frame matching
CN110033455B (en) * 2018-01-11 2023-01-03 上海交通大学 Method for extracting target object information from video
CN110033455A (en) * 2018-01-11 2019-07-19 上海交通大学 A method of extracting information on target object from video
CN108305267A (en) * 2018-02-14 2018-07-20 北京市商汤科技开发有限公司 Method for segmenting objects, device, equipment, storage medium and program
CN108305267B (en) * 2018-02-14 2020-08-11 北京市商汤科技开发有限公司 Object segmentation method, device, apparatus, storage medium, and program
CN108846844A (en) * 2018-04-13 2018-11-20 上海大学 A kind of sea-surface target detection method based on sea horizon
CN108846844B (en) * 2018-04-13 2022-02-08 上海大学 Sea surface target detection method based on sea antenna
CN108830834A (en) * 2018-05-23 2018-11-16 重庆交通大学 A kind of cable-climbing robot video artefacts information automation extraction method
CN109035257B (en) * 2018-07-02 2021-08-31 百度在线网络技术(北京)有限公司 Portrait segmentation method, device and equipment
CN109035257A (en) * 2018-07-02 2018-12-18 百度在线网络技术(北京)有限公司 portrait dividing method, device and equipment
CN109085658B (en) * 2018-07-09 2019-11-15 宁波大学 A kind of indoor human body sensing device
CN109085658A (en) * 2018-07-09 2018-12-25 宁波大学 A kind of indoor human body sensing device
CN109632590B (en) * 2019-01-08 2020-04-17 上海大学 Deep-sea luminous plankton detection method
CN109632590A (en) * 2019-01-08 2019-04-16 上海大学 A kind of luminous planktonic organism detection method in deep-sea
CN109782811B (en) * 2019-02-02 2021-10-08 绥化学院 Automatic following control system and method for unmanned model vehicle
CN109782811A (en) * 2019-02-02 2019-05-21 绥化学院 A kind of automatic tracing control system and method for unmanned model car
CN110163831A (en) * 2019-04-19 2019-08-23 深圳市思为软件技术有限公司 The object Dynamic Display method, apparatus and terminal device of three-dimensional sand table
CN110349189A (en) * 2019-05-31 2019-10-18 广州铁路职业技术学院(广州铁路机械学校) A kind of background image update method based on continuous inter-frame difference
US11443446B2 (en) 2019-09-30 2022-09-13 Tata Consultancy Services Limited Method and system for determining dynamism in a scene by processing depth image
CN110956219A (en) * 2019-12-09 2020-04-03 北京迈格威科技有限公司 Video data processing method and device and electronic system
CN110956219B (en) * 2019-12-09 2023-11-14 爱芯元智半导体(宁波)有限公司 Video data processing method, device and electronic system
CN111260695A (en) * 2020-01-17 2020-06-09 桂林理工大学 Throw-away sundry identification algorithm, system, server and medium
CN111612811B (en) * 2020-06-05 2021-02-19 中国人民解放军军事科学院国防科技创新研究院 Video foreground information extraction method and system
CN111612811A (en) * 2020-06-05 2020-09-01 中国人民解放军军事科学院国防科技创新研究院 Video foreground information extraction method and system
CN113409353A (en) * 2021-06-04 2021-09-17 杭州联吉技术有限公司 Motion foreground detection method and device, terminal equipment and storage medium
CN113456027A (en) * 2021-06-24 2021-10-01 南京润楠医疗电子研究院有限公司 Sleep parameter evaluation method based on wireless signals
CN113456027B (en) * 2021-06-24 2023-12-22 南京润楠医疗电子研究院有限公司 Sleep parameter assessment method based on wireless signals
CN113408669A (en) * 2021-07-30 2021-09-17 浙江大华技术股份有限公司 Image determination method and device, storage medium and electronic device
CN113408669B (en) * 2021-07-30 2023-06-16 浙江大华技术股份有限公司 Image determining method and device, storage medium and electronic device
CN114077877A (en) * 2022-01-19 2022-02-22 人民中科(济南)智能技术有限公司 Newly added garbage identification method and device, computer equipment and storage medium
CN114077877B (en) * 2022-01-19 2022-05-13 人民中科(北京)智能技术有限公司 Newly-added garbage identification method and device, computer equipment and storage medium
CN114581482A (en) * 2022-03-09 2022-06-03 湖南中科助英智能科技研究院有限公司 Moving target detection method and device under moving platform and detection equipment
CN116030367A (en) * 2023-03-27 2023-04-28 山东智航智能装备有限公司 Unmanned aerial vehicle viewing angle moving target detection method and device
CN116188534A (en) * 2023-05-04 2023-05-30 广东工业大学 Indoor real-time human body tracking method, storage medium and equipment
CN116188534B (en) * 2023-05-04 2023-08-08 广东工业大学 Indoor real-time human body tracking method, storage medium and equipment

Also Published As

Publication number Publication date
CN103325112B (en) 2016-03-23

Similar Documents

Publication Publication Date Title
CN103325112A (en) Quick detecting method for moving objects in dynamic scene
Minaeian et al. Effective and efficient detection of moving targets from a UAV’s camera
Ke et al. Multi-dimensional traffic congestion detection based on fusion of visual features and convolutional neural network
Rout A survey on object detection and tracking algorithms
CN110232330B (en) Pedestrian re-identification method based on video detection
Prokaj et al. Tracking many vehicles in wide area aerial surveillance
CN108280844B (en) Video target positioning method based on area candidate frame tracking
WO2019057197A1 (en) Visual tracking method and apparatus for moving target, electronic device and storage medium
Chandrajit et al. Multiple objects tracking in surveillance video using color and hu moments
Sun et al. Fast motion object detection algorithm using complementary depth image on an RGB-D camera
Zhang et al. New mixed adaptive detection algorithm for moving target with big data
Su et al. Real-time dynamic SLAM algorithm based on deep learning
Lu et al. Object contour tracking using multi-feature fusion based particle filter
Ali et al. Vehicle detection and tracking in UAV imagery via YOLOv3 and Kalman filter
Roy et al. A comprehensive survey on computer vision based approaches for moving object detection
Revaud et al. Robust automatic monocular vehicle speed estimation for traffic surveillance
Li et al. High-precision motion detection and tracking based on point cloud registration and radius search
Minematsu et al. Evaluation of foreground detection methodology for a moving camera
Umamaheswaran et al. Stereo vision based speed estimation for autonomous driving
Makhmalbaf et al. 2D vision tracking methods' performance comparison for 3D tracking of construction resources
Xin et al. Vehicle ego-localization based on the fusion of optical flow and feature points matching
Jianzhao et al. A fast background subtraction method using kernel density estimation for people counting
Lu et al. Custom Object Detection via Multi-Camera Self-Supervised Learning
Wang et al. Multi-object tracking in the overlapping area based on optical flows
Sujatha et al. An innovative moving object detection and tracking system by using modified region growing algorithm

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160323

Termination date: 20180607

CF01 Termination of patent right due to non-payment of annual fee