CN102456225B - Video monitoring system and moving target detecting and tracking method thereof - Google Patents

Video monitoring system and moving target detecting and tracking method thereof

Info

Publication number: CN102456225B
Application number: CN201010515055.8A
Authority: CN (China)
Prior art keywords: target, image, moving, point, moving target
Other versions: CN102456225A (Chinese)
Inventors: 张巍, 苏鹏宇, 唐李卉, 谢斌, 周锐鹏
Assignee: 深圳中兴力维技术有限公司
Application filed by 深圳中兴力维技术有限公司; priority to CN201010515055.8A
Published as CN102456225A; application granted and published as CN102456225B
Abstract

The invention discloses a video monitoring system and a moving-target detection and tracking method for that system. The method applies to video monitoring in which the camera is itself in motion. It comprises the following steps: a moving-target detection step, in which Harris corners combined with HOG descriptors serve as the image feature, and processing including feature extraction, projective transformation of the image, and image subtraction yields the moving targets; a model-establishment step, in which a model is built for every moving target and its features are expressed by a combination of Harris corners, extreme points, and HOG descriptors; and a target-tracking step, in which targets are tracked according to those models. According to the invention, corner-feature matching reduces the computational load while ensuring the precision of moving-target detection; using corners and extreme points as target features makes target identification robust and tracking highly sustainable; and a target-motion estimation method further reduces computation.

Description

Moving object detection and tracking method and system

Technical field

The present invention relates to the technical field of video monitoring, and in particular to a moving object detection and tracking method and system.

Background technology

In a traditional intelligent video monitoring system the camera is usually fixed: the background image is static while the foreground targets move. Such a system suffers practical problems: when cyclically monitoring several preset positions, a moving target easily leaves the field of view and can no longer be tracked continuously. These situations severely restrict the application of traditional intelligent video monitoring. Monitoring with a moving camera overcomes these defects of fixed-camera monitoring and has been applied to vehicle-mounted monitoring, PTZ (Pan-Tilt-Zoom: camera panning, tilting, and lens zooming) target tracking, intelligent robot vision, and so on; its application prospects are broad, and in recent years the detection and tracking of targets from a moving camera has attracted great attention from academia at home and abroad.

According to the domestic and foreign literature retrieved to date, because camera motion causes the background to change, the method usually adopted for target detection is: first estimate the projective-transformation parameters between two consecutive frames, then subtract the later frame from the frame obtained by projectively transforming the earlier one to obtain a static background, and finally use background subtraction to obtain the moving targets. The key to this method is estimating the projective-transformation parameters accurately and quickly. The usual approach is to find corresponding feature points between consecutive video images using image grayscale correlation, SIFT (Scale-Invariant Feature Transform) features, SURF (Speeded-Up Robust Features) features, and so on, and then estimate the projective-transformation matrix parameters by least squares. Comparatively, SIFT and SURF feature methods are more stable and reliable for finding image matching points, but their computational cost is large, making it hard to meet the needs of real-time analysis.

For target tracking, the method usually adopted is to use color features as the characteristic information of the tracked target. Although color is a very useful feature, when the target's color resembles the background's, tracking by color alone is often difficult and prone to tracking errors, so some researchers select features by fusing multiple feature types. Using SIFT or SURF features as the characteristic information of the tracked target is comparatively stable and adapts well to background color and illumination changes, making it a comparatively ideal way to express target features; its shortcoming is that for a minority of targets it may be difficult, or even impossible, to extract stable feature points.

Summary of the invention

The object of the present invention is to provide a moving object detection and tracking method and system that overcome the shortcomings of current detection and tracking techniques when the camera is in motion: detecting moving targets quickly and accurately, tracking them reliably and continuously, and reducing complexity.

The embodiments of the present invention achieve this as follows:

A moving object detection and tracking method, applied to a video monitoring system whose camera is in motion, comprising:

The moving-target detection step: acquire two consecutive frames f(t-1) and f(t) and apply Gaussian smoothing; extract the corresponding feature points of f(t-1) and f(t) using Harris corners combined with HOG descriptors, obtaining the corresponding-point set S'; from S', solve for the projective-transformation matrix A by the generalized-inverse method; projectively transform the previous frame f(t-1) by A to obtain frame F; subtract the later frame f(t) from frame F to obtain the difference image D, then threshold it to obtain the binary image B; filter the binary image B to obtain all moving targets;

Wherein, extracting the corresponding feature points of f(t-1) and f(t) using Harris corners combined with HOG descriptors to obtain the corresponding-point set S' further comprises:

a21, extract the Harris corner features of f(t-1) and f(t) respectively and build an HOG descriptor for each Harris corner, forming corner-feature vectors (x, y, d); where x and y are the corner's abscissa and ordinate in the image and d is the HOG descriptor;

a22, match the corner-feature vector sets of f(t-1) and f(t), obtaining the corner-match set S = {(x_i, y_i, d_i) → (x'_i, y'_i, d'_i) | i = 1, 2, 3, ..., m};

a23, filter the mismatches out of the corner-match set S with the RANSAC algorithm to obtain the corresponding-point set S', wherein

S' = {(x_i, y_i, d_i) → (x'_i, y'_i, d'_i) | i = 1, 2, 3, ..., n}, with n ≤ m;
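Steps a21 through a23 begin with corner detection and descriptor construction. The following numpy-only illustration is a minimal sketch, not the patent's implementation: a production system would presumably use library routines such as OpenCV's `cv2.cornerHarris` and `cv2.HOGDescriptor`, and the window size, bin count, and function names (`harris_response`, `hog_descriptor`) here are assumptions for illustration:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response map: det(M) - k*trace(M)^2 over a local window."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)
    def box(a):
        # 3x3 box sum as a cheap stand-in for Gaussian windowing
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))
    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

def hog_descriptor(patch, bins=9):
    """Single-cell HOG-style descriptor: unsigned gradient-orientation
    histogram of a patch, weighted by gradient magnitude and normalized."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-9)
```

Corners would then be taken at local maxima of the response map, and a descriptor built from the patch around each corner.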

The step of establishing the moving-target model: for each moving target, compute its position and size, crop the corresponding target image out of the current frame, and compute the target image's extreme points, Harris corners, and the HOG descriptor of each feature point; establish a moving-target model from the information obtained in the detection step, the model expressing the target's features by the combination of its position in the image, size, motion direction, displacement, Harris corners, extreme points, and HOG descriptors; wherein the moving-target model is defined as a = {h, w, area, d, desc, track}, where h is the target width, w the target length, area the target area, d the target circularity, desc the target's current HOG feature-vector set, and track the target's trajectory set;

The moving-target tracking step: for each moving target, estimate its position and size in the current frame from its model, and crop the corresponding region out of the current frame as the estimated target image f; compute the feature values of the estimated target image f, including its Harris corners, extreme points, and their HOG descriptors, obtaining the feature information of the estimated target image; match the features of the estimated target image against the original target image; if the match succeeds, update the information of this moving-target model, otherwise delete it.

Preferably, in step a22 the corner-matching condition is: for any corner (x_i, y_i, d_i) in f(t-1) and any corner (x'_j, y'_j, d'_j) in f(t), if |d_i − d'_j| = min{|d_i − d'_1|, |d_i − d'_2|, ..., |d_i − d'_n|}, then the corners (x_i, y_i, d_i) and (x'_j, y'_j, d'_j) are judged to match.
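The matching condition above amounts to nearest-neighbour search in descriptor space. A minimal brute-force sketch (Euclidean distance is assumed for |d_i − d'_j|, and the function name `match_corners` is illustrative, not from the patent):

```python
import numpy as np

def match_corners(desc_a, desc_b):
    """For each descriptor in desc_a (frame f(t-1)), pick the index of the
    nearest descriptor in desc_b (frame f(t)), per the min-distance rule."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = [np.linalg.norm(np.asarray(d) - np.asarray(e)) for e in desc_b]
        matches.append((i, int(np.argmin(dists))))
    return matches
```

The resulting index pairs form the corner-match set S, which RANSAC then prunes in step a23.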

Preferably, filtering the binary image B to obtain all moving targets further comprises:

a61, apply erosion and dilation to the binary image B, removing interference noise points and cavities;

a62, extract all moving targets on the binary image B, compute each moving target's centroid, length and width, circularity, and area, and save all moving-target information to a moving-target linked list.

Preferably, after step a62 the method further comprises: a63, removing pseudo-targets from the moving-target linked list.

Preferably, estimating the position and size of the moving target in the current frame from the model information and cropping it out of the current frame to obtain the estimated target image f further comprises:

c11, estimate the target's position (x'_i, y'_i), length w', and width h' in the current frame by

x'_i = x_{i-1} + Δx,  y'_i = y_{i-1} + Δy,  h' = h + k·Δy,  w' = w + k·Δx,

where Δx = 0 if i ≤ 2, else x_{i-1} − x_{i-2}; Δy = 0 if i ≤ 2, else y_{i-1} − y_{i-2}; k is the target zoom factor, k ≥ 2; (x_{i-1}, y_{i-1}) and (x_{i-2}, y_{i-2}) are the target's positions in the two preceding frames; and h and w are the target's width and length in the previous frame, respectively;

c12, crop the moving target out of the current frame according to the parameters (x'_i, y'_i), h', w' to obtain the estimated target image f.

According to another aspect of the present invention, a moving object detection and tracking system is provided, applied in a video monitoring system whose camera is in motion; the system comprises a moving-target detection module, a moving-target model-establishment module, and a moving-target tracking module, wherein:

Moving-target detection module: acquires two consecutive frames f(t-1) and f(t) and applies Gaussian smoothing; extracts the corresponding feature points of f(t-1) and f(t) using Harris corners combined with HOG descriptors, obtaining the corresponding-point set S'; from S', solves for the projective-transformation matrix A by the generalized-inverse method; projectively transforms the previous frame f(t-1) by A to obtain image F; subtracts the later frame f(t) from frame F to obtain the difference image D, then thresholds it to obtain the binary image B; filters the binary image B to obtain all moving targets;

Wherein, extracting the corresponding feature points of f(t-1) and f(t) using Harris corners combined with HOG descriptors to obtain S' specifically comprises: extracting the Harris corner features of f(t-1) and f(t) respectively and building an HOG descriptor for each Harris corner, forming corner-feature vectors (x, y, d), where x and y are the corner's abscissa and ordinate in the image and d is the HOG descriptor; matching the corner-feature vector sets of f(t-1) and f(t), obtaining the corner-match set S = {(x_i, y_i, d_i) → (x'_i, y'_i, d'_i) | i = 1, 2, 3, ..., m}; filtering the mismatches out of the corner-match set S with the RANSAC algorithm to obtain the corresponding-point set S', wherein

S' = {(x_i, y_i, d_i) → (x'_i, y'_i, d'_i) | i = 1, 2, 3, ..., n}, with n ≤ m;

Moving-target model-establishment module: for each moving target, computes its position and size, crops the corresponding target image out of the current frame, and computes the target image's extreme points, Harris corners, and the HOG descriptor of each feature point; establishes a moving-target model from the information obtained by the detection module, the model expressing the target's features by the combination of its position in the image, size, motion direction, displacement, Harris corners, extreme points, and HOG descriptors; wherein the moving-target model is defined as a = {h, w, area, d, desc, track}, where h is the target width, w the target length, area the target area, d the target circularity, desc the target's current HOG feature-vector set, and track the target's trajectory set;

Moving-target tracking module: for each moving target, estimates its position and size in the current frame from its model, and crops the corresponding region out of the current frame as the estimated target image f; computes the feature values of the estimated target image f, including its Harris corners, extreme points, and their HOG descriptors, obtaining the feature information of the estimated target image; matches the features of the estimated target image against the original target image; if the match succeeds, updates the information of this moving-target model, otherwise deletes it.

Compared with the prior art, the beneficial effects of the embodiments of the present invention are:

(1) In video-image representation: consecutive frames differ by only small translations and rotations, and the HOG descriptor is translation- and rotation-invariant; moreover, the corner features of an image are stable and reliable, and corner extraction is faster than SIFT or SURF extreme-point extraction, which favors real-time video computation. Combining these properties, the present invention uses Harris corners together with HOG descriptors to obtain the stable features of a video image quickly; this is very suitable for real-time video computation and provides a reliable guarantee for the subsequent projection-matrix calculation;

(2) In moving-target representation: neither the corner features nor the extreme points of a target image can express the target's features on their own. For example, few or no corners may be extracted from a round, smooth target, and no extreme points from a target of uniform color; either failure of feature extraction would make the target untrackable. The present invention therefore expresses a moving target completely by combining corners with extreme points, guaranteeing tracking stability. Although this introduces extreme-point computation and raises complexity, the moving targets are small relative to the whole image, and computing extreme-point features only on the targets rather than on the whole image still meets the needs of real-time processing.

(3) In the tracking process: because a target's displacement and rotation between two consecutive frames are small, the same target can be assumed to undergo an affine transformation between the two frames. On this premise, the present invention uses the affine-transformation matrix to locate the original target's position and size in the current frame accurately, and finally updates the target's model parameters, ensuring tracking continuity. The benefits of this recognition method are accurate tracking registration and strong tracking sustainability; it can overcome partial occlusion between targets and adapts well to background and illumination changes.

In summary, the invention solves the problem of detecting and tracking multiple moving targets in real time while the camera is moving, ensuring both real-time performance and reliability. In solving the image projective-transformation matrix, matching stable corner features not only greatly reduces computation but also guarantees the precision of feature matching, so the projective-transformation matrix is estimated quickly and accurately and the precision of moving-target detection is assured. Using corners and extreme points as target features makes target identification more robust and tracking more sustainable, while overcoming partial occlusion and pose changes between targets and adapting well to background and illumination changes. The target-motion estimation method narrows the target-search range, reducing computation and greatly lowering the complexity of tracking.

Brief description of the drawings

Fig. 1 is a structural diagram of the video monitoring system in an embodiment of the present invention.

Fig. 2 is a flowchart of the moving-target detection method in an embodiment of the present invention.

Fig. 3 is a flowchart of the method for establishing the moving-target model in an embodiment of the present invention.

Fig. 4 is a flowchart of the moving-target tracking method in an embodiment of the present invention.

Fig. 5 is a schematic diagram of the binary model of a moving target in an embodiment of the present invention.

Embodiment

To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further elaborated below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and are not intended to limit it.

Referring to Fig. 1, the video monitoring system provided by this embodiment mainly comprises: a vehicle-mounted camera 11, a DSP 12, a moving-target detection module 13, and a moving-target tracking module 14. The DSP 12 processes the video images captured by the vehicle-mounted camera 11 and calls the moving-target detection module 13 to extract moving targets; if any moving target exists, it calls the moving-target tracking module 14 to track it. The concrete processing comprises: the moving-target detection step, the step of establishing the moving-target model, and the moving-target tracking step. Wherein,

(1) as shown in Figure 2, the step of moving object detection specifically comprises:

Step 101: acquire two consecutive frames f(t-1), f(t) and apply Gaussian smoothing.

Step 102: compute the corresponding feature points of the two frames.

This specifically comprises: 1. extract the Harris corner features of both images and build an HOG descriptor for each corner, forming corner-feature vectors (x, y, d), where x and y are the corner's abscissa and ordinate in the image and d is the HOG descriptor;

2. corner matching: let the corner-feature vector set produced by image f(t-1) be

desc(t-1) = {(x_1, y_1, d_1), (x_2, y_2, d_2), ..., (x_m, y_m, d_m)}, and let image f(t) produce the corner-feature vector set desc(t) = {(x'_1, y'_1, d'_1), (x'_2, y'_2, d'_2), ..., (x'_n, y'_n, d'_n)}; the corner-matching condition is:

(x_i, y_i, d_i) → (x'_j, y'_j, d'_j) if |d_i − d'_j| = min{|d_i − d'_1|, |d_i − d'_2|, ..., |d_i − d'_n|}

finally obtaining the corner-match set S = {(x_i, y_i, d_i) → (x'_i, y'_i, d'_i) | i = 1, 2, 3, ..., m}.

3. on the basis of the set S, filter the mismatches with the RANSAC algorithm to obtain S':

S' = {(x_i, y_i, d_i) → (x'_i, y'_i, d'_i) | i = 1, 2, 3, ..., n}, with n ≤ m
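Sub-step 3 can be illustrated with a small self-contained RANSAC loop. This is a numpy-only sketch under an affine motion model (in practice `cv2.findHomography` with `cv2.RANSAC` would be the usual library route); the iteration count, inlier tolerance, and function name `ransac_filter` are illustrative assumptions:

```python
import numpy as np

def ransac_filter(src, dst, iters=200, tol=2.0, seed=0):
    """Return a boolean mask of matches consistent with one affine transform:
    repeatedly fit an affine map to 3 random matches and keep the largest
    inlier set (mismatches fall outside the tolerance)."""
    rng = np.random.default_rng(seed)
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    n = len(src)
    best = np.zeros(n, dtype=bool)
    for _ in range(iters):
        idx = rng.choice(n, 3, replace=False)
        # fit [x', y'] = [x, y, 1] @ M by least squares (M is 3x2)
        A = np.hstack([src[idx], np.ones((3, 1))])
        M, *_ = np.linalg.lstsq(A, dst[idx], rcond=None)
        pred = np.hstack([src, np.ones((n, 1))]) @ M
        inliers = np.linalg.norm(pred - dst, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

The matches flagged False are the mismatches removed from S to form S'.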

Step 103: from the corresponding-point set S' obtained in step 102, compute the projection matrix A by the generalized-inverse method:

Let

Y = [x_1 x_2 ... x_k; y_1 y_2 ... y_k; 1 1 ... 1],  X = [x'_1 x'_2 ... x'_k; y'_1 y'_2 ... y'_k; 1 1 ... 1],

then Y = AX and A = YXᵀ(XXᵀ)⁻¹ (here Xᵀ denotes the transpose of X).
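Step 103's formula is a direct least-squares solve. A minimal numpy sketch (the function name `estimate_projection` is illustrative, not from the patent):

```python
import numpy as np

def estimate_projection(Y_pts, X_pts):
    """Solve Y = A X in the least-squares sense via the generalized
    inverse: A = Y X^T (X X^T)^(-1), with points stacked as 3 x k
    homogeneous coordinate matrices."""
    def homog(pts):
        pts = np.asarray(pts, dtype=float)
        return np.vstack([pts.T, np.ones(len(pts))])  # 3 x k
    Y, X = homog(Y_pts), homog(X_pts)
    return Y @ X.T @ np.linalg.inv(X @ X.T)
```

With at least three non-collinear matched points, XXᵀ is invertible and A recovers the affine part of the transform exactly when the matches are noise-free.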

Step 104: projectively transform the previous frame f(t-1) to obtain frame F, as follows:

[u, v, 1]ᵀ = A·[x, y, 1]ᵀ,

where (u, v) are pixel coordinates in image F and (x, y) are pixel coordinates in image f(t-1);

Step 105: subtract the later frame f(t) from frame F to obtain the difference image D, i.e. D = |F − f(t)|, then threshold it to obtain the binary image B:

B(i, j) = 1 if D(i, j) > T₀, else 0, with T₀ ∈ [10, 20]
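Steps 104 and 105 can be sketched together. This numpy-only illustration uses nearest-neighbour inverse warping (a real implementation would presumably use `cv2.warpPerspective` and `cv2.threshold`; the function names are assumptions):

```python
import numpy as np

def warp_frame(prev, A):
    """Build F per [u, v, 1]^T = A [x, y, 1]^T by inverse mapping: for each
    output pixel (u, v), sample prev at A^(-1) [u, v, 1]^T (nearest pixel)."""
    h, w = prev.shape
    vs, us = np.mgrid[0:h, 0:w]
    src = np.linalg.inv(A) @ np.stack([us.ravel(), vs.ravel(), np.ones(h * w)])
    sx = np.rint(src[0] / src[2]).astype(int)
    sy = np.rint(src[1] / src[2]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    F = np.zeros(h * w)
    F[ok] = prev.astype(float)[sy[ok], sx[ok]]
    return F.reshape(h, w)

def binary_motion(prev, curr, A, T0=15):
    """Step 105: D = |F - f(t)|, then B = (D > T0)."""
    D = np.abs(warp_frame(prev, A) - curr.astype(float))
    return (D > T0).astype(np.uint8)
```

With an accurate A, static background cancels in D and only moving regions survive the threshold.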

Step 106: filter the binary image B, specifically:

1. apply erosion and dilation to the binary image B, removing interference noise points and filling cavities;

2. extract the moving targets on the binary image B and compute each target's centroid, length and width, circularity, and area. As shown in Fig. 5, the target centroid is the center of the white region, and

target area = length × width

Then join all targets to the moving-target linked list objList = {a_1, a_2, a_3, ..., a_n};

3. remove pseudo-targets according to the empirical conditions satisfied by real targets, obtaining the final reliable moving-target linked list. The empirical conditions are: (i) the target aspect ratio lies in the interval [0.2, 5.0]; (ii) the target circularity is greater than 0.3; (iii) the target area is greater than 200.
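Sub-steps 2 and 3 can be sketched as blob labeling plus filtering. This numpy-only illustration labels 4-connected components and applies the three empirical conditions; the circularity formula 4π·area/perimeter² is an assumption (the patent does not define its formula), with the perimeter taken as the count of pixel edges bordering the background:

```python
import numpy as np
from collections import deque

def extract_targets(B, min_area=200, ar_lo=0.2, ar_hi=5.0, min_circ=0.3):
    """Label 4-connected blobs of the binary image B and keep those meeting
    the empirical conditions of step 106-3 (aspect ratio, circularity, area)."""
    h, w = B.shape
    seen = np.zeros((h, w), dtype=bool)
    targets = []
    for si in range(h):
        for sj in range(w):
            if not B[si, sj] or seen[si, sj]:
                continue
            blob, q = [], deque([(si, sj)])
            seen[si, sj] = True
            while q:  # breadth-first flood fill of one component
                y, x = q.popleft()
                blob.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and B[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        q.append((ny, nx))
            pix = set(blob)
            area = len(blob)
            ys = [p[0] for p in blob]
            xs = [p[1] for p in blob]
            hh = max(ys) - min(ys) + 1
            ww = max(xs) - min(xs) + 1
            per = sum(1 for (y, x) in blob
                      for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if (y + dy, x + dx) not in pix)
            circ = 4 * np.pi * area / per ** 2 if per else 0.0
            if area > min_area and ar_lo <= ww / hh <= ar_hi and circ > min_circ:
                targets.append({"centroid": (sum(xs) / area, sum(ys) / area),
                                "h": hh, "w": ww, "area": area,
                                "circularity": circ})
    return targets
```

Noise specks fail the area condition and elongated or ragged artifacts fail the aspect-ratio or circularity conditions, leaving only plausible real targets.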

(2) The step of establishing the moving-target model, as shown in Fig. 3, comprises:

Step 201: cyclically traverse the moving-target linked list and extract the information of the n-th moving target (the initial value of n is 1);

Step 202: according to the target's position and size, crop it out of the current video image and compute its extreme points and Harris corners; the extreme points are computed by the method suggested by David Lowe (extracting only at the original image scale), and the HOG descriptor of each feature point is computed;

Step 203: establish and save the moving-target model; the model information comprises the target's position in the image, size, motion direction, displacement, and the coordinates and HOG descriptors of its corners and extreme points;

The target model is defined as: a = {h, w, area, d, desc, track}

where h is the target width, w the target length, area the target area, d the target circularity, desc the target's current HOG feature-vector set, and track the target's trajectory set.
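The model of step 203 maps naturally onto a small data structure. A sketch (field names follow the patent's definitions as given, including the somewhat counter-intuitive assignment of h to width and w to length; the class name `TargetModel` is an assumption):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TargetModel:
    """Moving-target model a = {h, w, area, d, desc, track} from step 203."""
    h: float      # target width
    w: float      # target length
    area: float   # target area
    d: float      # target circularity
    desc: List[list] = field(default_factory=list)  # current HOG feature vectors
    track: List[Tuple[float, float]] = field(default_factory=list)  # (x, y) trajectory
```

Each tracked target carries one such record; step 304 updates its fields on a successful match and deletes the record on failure.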

Step 204: n = n + 1; if n is not greater than the length of the target linked list, go to step 201; otherwise go to step (3).

(3) The moving-target tracking step, as shown in Fig. 4, comprises:

Step 301: cyclically traverse the moving-target linked list and obtain the j-th moving-target model information a_j;

Step 302: estimate the position and size of the j-th moving target in the current frame from the model information obtained in step (2), and crop it out of the current frame to obtain the estimated target image. This step specifically comprises:

1. define track = {(x_i, y_i) | i = 1, 2, 3, ..., n} as the target trajectory set; the target's estimated position (x'_i, y'_i) in the current frame is

x'_i = x_{i-1} + Δx,  y'_i = y_{i-1} + Δy,

with width h' = h + k·Δy and length w' = w + k·Δx,

where Δx = 0 if i ≤ 2, else x_{i-1} − x_{i-2}; Δy = 0 if i ≤ 2, else y_{i-1} − y_{i-2}; and k is the target zoom factor, k ≥ 2, here taken as k = 3.

2. according to the parameters (x'_i, y'_i), h', w' estimated in sub-step 1, crop the target out of the current image to obtain the target image f.
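The estimate of sub-step 1 is a constant-velocity extrapolation of the last displacement, with the search region enlarged by the zoom factor k. A sketch (the condition i ≤ 2 is read here as "fewer than two past trajectory points"; the function name `predict_region` is an assumption):

```python
def predict_region(track, h, w, k=3):
    """Step 302-1: x'_i = x_{i-1} + dx, y'_i = y_{i-1} + dy,
    h' = h + k*dy, w' = w + k*dx, where (dx, dy) is the last
    displacement along the trajectory (zero if fewer than two
    past positions exist)."""
    if len(track) < 2:
        dx = dy = 0
    else:
        dx = track[-1][0] - track[-2][0]
        dy = track[-1][1] - track[-2][1]
    x = track[-1][0] + dx
    y = track[-1][1] + dy
    return x, y, h + k * dy, w + k * dx
```

Restricting feature computation to this predicted region is what narrows the search range and reduces the tracking computation, as claimed in the summary.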

Step 303: compute the extreme points and corners of the target image f, compute the corresponding HOG descriptors, and obtain the estimated target vector a'_j;

Step 304: target recognition: match the estimated target vector a'_j with the original target vector a_j; if the match succeeds, tracking has succeeded and the moving-target model information a_j is updated; if the match fails, tracking has failed and the model information is deleted from the linked list. Specifically:

1. match the corner features of a'_j and a_j by the method of step 102; if the match succeeds, continue to the next sub-step, otherwise delete the model information a_j of the j-th moving target;

2. compute the target's projection matrix A_a by the method of step 103, then use the method of step 104 to obtain the target's centroid coordinates, length and width, and corner information in the current image;

3. update the model information a_j of the j-th moving target with the parameters obtained in sub-step 2;

Step 305: j = j + 1; if j is less than or equal to the length of the target linked list, go to step 301; otherwise finish.

Correspondingly, matching the above moving object detection and tracking method, the embodiment of the present invention also provides a moving object detection and tracking system; the system comprises a moving-target detection module, a moving-target model-establishment module, and a moving-target tracking module, wherein:

Moving-target detection module: acquires two consecutive frames f(t-1) and f(t) and applies Gaussian smoothing; extracts the corresponding feature points of f(t-1) and f(t) using Harris corners combined with HOG descriptors, obtaining the corresponding-point set S'; from S', solves for the projective-transformation matrix A by the generalized-inverse method; projectively transforms the previous frame f(t-1) by A to obtain image F; subtracts the later frame f(t) from frame F to obtain the difference image D, then thresholds it to obtain the binary image B; filters the binary image B to obtain all moving targets;

Wherein, extracting the corresponding feature points of f(t-1) and f(t) using Harris corners combined with HOG descriptors to obtain S' specifically comprises: extracting the Harris corner features of f(t-1) and f(t) respectively and building an HOG descriptor for each Harris corner, forming corner-feature vectors (x, y, d), where x and y are the corner's abscissa and ordinate in the image and d is the HOG descriptor; matching the corner-feature vector sets of f(t-1) and f(t), obtaining the corner-match set S = {(x_i, y_i, d_i) → (x'_i, y'_i, d'_i) | i = 1, 2, 3, ..., m}; filtering the mismatches out of the corner-match set S with the RANSAC algorithm to obtain the corresponding-point set S', wherein

S' = {(x_i, y_i, d_i) → (x'_i, y'_i, d'_i) | i = 1, 2, 3, ..., n}, with n ≤ m;

Moving-target model-establishment module: for each moving target, computes its position and size, crops the corresponding target image out of the current frame, and computes the target image's extreme points, Harris corners, and the HOG descriptor of each feature point; establishes a moving-target model from the information obtained by the detection module, the model expressing the target's features by the combination of its position in the image, size, motion direction, displacement, Harris corners, extreme points, and HOG descriptors; wherein the moving-target model is defined as a = {h, w, area, d, desc, track}, where h is the target width, w the target length, area the target area, d the target circularity, desc the target's current HOG feature-vector set, and track the target's trajectory set;

Moving-target tracking module: for each moving target, estimates its position and size in the current frame from its model, and crops the corresponding region out of the current frame as the estimated target image f; computes the feature values of the estimated target image f, including its Harris corners, extreme points, and their HOG descriptors, obtaining the feature information of the estimated target image; matches the features of the estimated target image against the original target image; if the match succeeds, updates the information of this moving-target model, otherwise deletes it.

The foregoing are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within its scope of protection.

Claims (6)

1. a moving object detection and tracking method, is applied in the video monitoring system of video camera under being kept in motion, and it is characterized in that, comprising:
The step of moving object detection: two two field picture f (t-1), the f (t) to continuous acquisition also carries out Gaussian smoothing, adopt Harris angle point to extract described image f (t-1) and f (t) character pair point in conjunction with the mode of HOG descriptor, obtain character pair point set S'; According to the S set of described character pair point ', utilize generalized inverse method to try to achieve projective transformation matrix A; Former frame image f (t-1) is obtained to two field picture F according to described projective transformation matrix do projective transformation; Two field picture F and a rear two field picture f (t) are done poor to obtain difference image D, then passing threshold is cut apart and is obtained bianry image B; Bianry image B is carried out to filtering operation and obtain all moving targets;
wherein said extracting corresponding feature points of the images f(t-1) and f(t) using Harris corners combined with HOG descriptors to obtain the corresponding feature point set S' further comprises:
a21. extracting the Harris corner features of the images f(t-1) and f(t) respectively and building an HOG descriptor for each Harris corner, forming corner feature description vectors (x, y, d), where x and y are the abscissa and ordinate of the Harris corner in the image and d is its HOG descriptor;
a22. matching the corner feature vector sets of the images f(t-1) and f(t) to obtain a corner matching set S = {(x_i, y_i, d_i) → (x'_i, y'_i, d'_i) | i = 1, 2, 3, ..., m};
a23. filtering out mismatched points in the corner matching set S using the RANSAC algorithm to obtain the corresponding feature point set S', where
S' = {(x_i, y_i, d_i) → (x'_i, y'_i, d'_i) | i = 1, 2, 3, ..., n} and n ≤ m;
A moving target model building step: for each moving target, computing the position and size of the moving target, cropping the corresponding target image from the current frame image, and computing the extreme points, Harris corners and the HOG descriptor of each feature point of the target image; building a moving target model from the moving target information obtained in the moving target detection step, the model representing the moving target features by a combination of the target's position in the image, size, motion direction, displacement, Harris corners, extreme points and HOG descriptors; wherein the moving target model is defined as a = {h, w, area, d, desc, track}, where h denotes the target height, w the target width, area the target area, d the target circularity, desc the target's current HOG feature vector set, and track the target trajectory set;
A moving target tracking step: for each moving target, estimating the position and size of the moving target in the current frame image according to its moving target model, and cropping the corresponding region from the current frame image as an estimated target image f; computing the feature values of the estimated target image f, including its Harris corners, extreme points and their HOG descriptors, to obtain the feature information of the estimated target image; and matching the features of the estimated target image against those of the original target image, updating the information of the moving target model if the match succeeds and otherwise deleting the moving target model.
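The detection step's "generalized inverse method" can be illustrated with a short numpy sketch: matched corner pairs from S' are stacked into a linear system for the 8 projective parameters, which is solved with the Moore-Penrose pseudoinverse. The function name and the convention A[2][2] = 1 are illustrative assumptions.

```python
import numpy as np

def projective_from_matches(src, dst):
    """Estimate the 3x3 projective transformation matrix A from matched
    corner pairs, using the generalized (Moore-Penrose) inverse as the
    detection step describes. src, dst: (n, 2) arrays with n >= 4
    non-collinear point correspondences (x, y) -> (x', y')."""
    n = src.shape[0]
    M = np.zeros((2 * n, 8))
    b = np.zeros(2 * n)
    for i, ((x, y), (xp, yp)) in enumerate(zip(src, dst)):
        # linearized projective constraints:
        # a1*x + a2*y + a3 - a7*x*x' - a8*y*x' = x'  (and likewise for y')
        M[2 * i]     = [x, y, 1, 0, 0, 0, -x * xp, -y * xp]
        M[2 * i + 1] = [0, 0, 0, x, y, 1, -x * yp, -y * yp]
        b[2 * i], b[2 * i + 1] = xp, yp
    p = np.linalg.pinv(M) @ b            # generalized-inverse solution
    return np.append(p, 1.0).reshape(3, 3)  # homography with A[2][2] = 1
```

Warping f(t-1) by the resulting A aligns the camera motion before the two frames are differenced.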
2. The moving target detection and tracking method according to claim 1, characterized in that in step a22 the corner matching condition is: for any corner (x_i, y_i, d_i) in the image f(t-1) and any corner (x'_j, y'_j, d'_j) in the image f(t), if |d_i − d'_j| = min{|d_i − d'_1|, |d_i − d'_2|, ..., |d_i − d'_n|}, then the corner (x_i, y_i, d_i) and the corner (x'_j, y'_j, d'_j) are judged to match.
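Claim 2's nearest-neighbour rule can be sketched directly in numpy. The function name `match_corners` and the use of the L2 norm for |d_i − d'_j| on vector-valued HOG descriptors are assumptions for illustration.

```python
import numpy as np

def match_corners(desc_prev, desc_curr):
    """Nearest-neighbour rule of claim 2: corner i in f(t-1) matches the
    corner j in f(t) whose descriptor distance |d_i - d'_j| is minimal
    over all corners of f(t). desc_prev, desc_curr: (n, d) and (m, d)
    HOG descriptor arrays; returns a list of (i, j) index pairs."""
    matches = []
    for i, d in enumerate(desc_prev):
        # arg min over all candidate descriptors in the current frame
        dists = np.linalg.norm(desc_curr - d, axis=1)
        matches.append((i, int(np.argmin(dists))))
    return matches
```

In the method these raw matches form the set S, which RANSAC then prunes to S'.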
3. The moving target detection and tracking method according to claim 1, characterized in that said filtering the binary image B to obtain all moving targets further comprises:
a61. applying erosion and dilation operations to the binary image B to remove interference noise points and holes;
a62. extracting all moving targets from the binary image B, computing the centroid, length and width, target circularity and target area of each moving target, and saving all moving target information to a moving target linked list.
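Step a62's per-target statistics can be sketched with a simple 4-connectivity labelling pass over the binary image. This is an illustrative numpy sketch, not the patent's implementation: the function name and dictionary keys are assumptions, and the circularity computation is omitted since the patent does not define its formula.

```python
import numpy as np
from collections import deque

def blob_stats(binary):
    """Sketch of step a62: label connected foreground regions of the
    binary image B (4-connectivity BFS) and report each target's
    centroid (x, y), bounding-box size (width, height) and area."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    targets = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                pts, q = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:  # flood-fill one connected component
                    y, x = q.popleft()
                    pts.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                ys, xs = zip(*pts)
                targets.append({
                    "centroid": (sum(xs) / len(xs), sum(ys) / len(ys)),
                    "size": (max(xs) - min(xs) + 1, max(ys) - min(ys) + 1),
                    "area": len(pts),
                })
    return targets
```

The resulting list plays the role of the claim's moving target linked list.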
4. The moving target detection and tracking method according to claim 3, characterized in that after step a62 it further comprises:
a63. removing pseudo targets from the moving target linked list.
5. The moving target detection and tracking method according to claim 1, characterized in that said estimating the position and size of the moving target in the current frame image according to the moving target model information, and cropping it from the current frame image to obtain the estimated target image f, further comprises:
c11. estimating the position (x'_i, y'_i), height h' and width w' of the moving target in the current frame image according to:
x'_i = x_{i-1} + Δx, y'_i = y_{i-1} + Δy, h' = h + k·Δy, w' = w + k·Δx,
where Δx = 0 if i ≤ 2 and Δx = x_{i-1} − x_{i-2} otherwise; Δy = 0 if i ≤ 2 and Δy = y_{i-1} − y_{i-2} otherwise; k is the target zoom factor, k ≥ 2; (x_{i-1}, y_{i-1}) and (x_{i-2}, y_{i-2}) are the positions of the moving target in the previous two frame images; and h and w are the height and width of the moving target in the previous frame image;
c12. cropping the moving target from the current frame image according to the parameters (x'_i, y'_i), h', w' to obtain the estimated target image f.
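Claim 5's prediction is a constant-velocity extrapolation from the last two track points plus a window enlargement by the zoom factor k. A direct sketch (the function name and the `track` list representation are assumptions):

```python
def estimate_window(track, h, w, k=2):
    """Sketch of the claim-5 prediction: extrapolate the target's next
    position from its last two track points and enlarge the search
    window by the zoom factor k (k >= 2). track holds past (x, y)
    positions; h and w are the target's height and width in the
    previous frame. Fewer than two past positions gives dx = dy = 0,
    mirroring the claim's i <= 2 case."""
    if len(track) < 2:
        dx = dy = 0
    else:
        (x2, y2), (x1, y1) = track[-2], track[-1]
        dx, dy = x1 - x2, y1 - y2
    x_prev, y_prev = track[-1]
    return (x_prev + dx, y_prev + dy, h + k * dy, w + k * dx)
```

Enlarging the window by k·Δ keeps a fast-moving target inside the crop even when the extrapolated position is slightly off, which is what lets the tracker avoid a full-frame search.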
6. A moving target detection and tracking system, applied in a video monitoring system whose camera is in motion, characterized in that the system comprises a moving target detection module, a moving target model building module and a moving target tracking module, wherein:
Moving target detection module: configured to acquire two consecutively captured frame images f(t-1) and f(t) and apply Gaussian smoothing to them; extract corresponding feature points of the images f(t-1) and f(t) using Harris corners combined with HOG descriptors to obtain a corresponding feature point set S'; solve for a projective transformation matrix A from the corresponding feature point set S' by the generalized inverse method; apply a projective transformation to the previous frame image f(t-1) according to the projective transformation matrix to obtain an image F; subtract the later frame image f(t) from the image F to obtain a difference image D, then threshold the difference image to obtain a binary image B; and filter the binary image B to obtain all moving targets;
wherein said extracting corresponding feature points of the images f(t-1) and f(t) using Harris corners combined with HOG descriptors to obtain the corresponding feature point set S' specifically comprises: extracting the Harris corner features of the images f(t-1) and f(t) respectively and building an HOG descriptor for each Harris corner, forming corner feature description vectors (x, y, d), where x and y are the abscissa and ordinate of the Harris corner in the image and d is its HOG descriptor; matching the corner feature vector sets of the images f(t-1) and f(t) to obtain a corner matching set S = {(x_i, y_i, d_i) → (x'_i, y'_i, d'_i) | i = 1, 2, 3, ..., m}; and filtering out mismatched points in the corner matching set S using the RANSAC algorithm to obtain the corresponding feature point set S', where
S' = {(x_i, y_i, d_i) → (x'_i, y'_i, d'_i) | i = 1, 2, 3, ..., n} and n ≤ m;
Moving target model building module: configured to, for each moving target, compute the position and size of the moving target, crop the corresponding target image from the current frame image, and compute the extreme points, Harris corners and the HOG descriptor of each feature point of the target image; build a moving target model from the moving target information obtained in the moving target detection process, the model representing the moving target features by a combination of the target's position in the image, size, motion direction, displacement, Harris corners, extreme points and HOG descriptors; wherein the moving target model is defined as a = {h, w, area, d, desc, track}, where h denotes the target height, w the target width, area the target area, d the target circularity, desc the target's current HOG feature vector set, and track the target trajectory set;
Moving target tracking module: configured to, for each moving target, estimate the position and size of the moving target in the current frame image according to its moving target model, and crop the corresponding region from the current frame image as an estimated target image f; compute the feature values of the estimated target image f, including its Harris corners, extreme points and their HOG descriptors, to obtain the feature information of the estimated target image; and match the features of the estimated target image against those of the original target image, updating the information of the moving target model if the match succeeds and otherwise deleting the moving target model.
CN201010515055.8A 2010-10-22 2010-10-22 Video monitoring system and moving target detecting and tracking method thereof CN102456225B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010515055.8A CN102456225B (en) 2010-10-22 2010-10-22 Video monitoring system and moving target detecting and tracking method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010515055.8A CN102456225B (en) 2010-10-22 2010-10-22 Video monitoring system and moving target detecting and tracking method thereof

Publications (2)

Publication Number Publication Date
CN102456225A CN102456225A (en) 2012-05-16
CN102456225B true CN102456225B (en) 2014-07-09

Family

ID=46039388

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010515055.8A CN102456225B (en) 2010-10-22 2010-10-22 Video monitoring system and moving target detecting and tracking method thereof

Country Status (1)

Country Link
CN (1) CN102456225B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103200358B (en) * 2012-01-06 2016-04-13 杭州普维光电技术有限公司 Coordinate transformation method between video camera and target scene and device
CN102799883B (en) * 2012-06-29 2015-07-22 广州中国科学院先进技术研究所 Method and device for extracting movement target from video image
CN103824305A (en) * 2014-03-17 2014-05-28 天津工业大学 Improved Meanshift target tracking method
CN103996028B (en) * 2014-05-21 2017-04-05 南京航空航天大学 A kind of vehicle Activity recognition method
CN104504724B (en) * 2015-01-15 2018-04-06 杭州国策商图科技有限公司 A kind of moving body track and extraction algorithm not influenceed by barrier
CN105025198B (en) * 2015-07-22 2019-01-01 东方网力科技股份有限公司 A kind of group technology of the video frequency motion target based on Spatio-temporal factors
CN106534614A (en) * 2015-09-10 2017-03-22 南京理工大学 Rapid movement compensation method of moving target detection under mobile camera
CN105321180A (en) * 2015-10-21 2016-02-10 浪潮(北京)电子信息产业有限公司 Target tracking and positioning method and apparatus based on cloud computing
CN105427344B (en) * 2015-11-18 2018-04-03 国网江苏省电力有限公司检修分公司 Moving target detecting method in a kind of substation intelligence system
CN107105193B (en) * 2016-02-23 2020-03-20 芋头科技(杭州)有限公司 Robot monitoring system based on human body information
CN105933698A (en) * 2016-04-14 2016-09-07 吴本刚 Intelligent satellite digital TV program play quality detection system
CN106815856B (en) * 2017-01-13 2019-07-16 大连理工大学 A kind of moving-target Robust Detection Method under area array camera rotary scanning
CN107481269B (en) * 2017-08-08 2020-07-03 西安科技大学 Multi-camera moving object continuous tracking method for mine
CN107704797B (en) * 2017-08-08 2020-06-23 深圳市安软慧视科技有限公司 Real-time detection method, system and equipment based on pedestrians and vehicles in security video
US20200074678A1 (en) * 2018-08-28 2020-03-05 Beijing Jingdong Shangke Information Technology Co., Ltd. Device and method of tracking poses of multiple objects based on single-object pose estimator
CN110188754A (en) * 2019-05-29 2019-08-30 腾讯科技(深圳)有限公司 Image partition method and device, model training method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101009021A (en) * 2007-01-25 2007-08-01 复旦大学 Video stabilizing method based on matching and tracking of characteristic
CN101109818A (en) * 2006-07-20 2008-01-23 中国科学院自动化研究所 Method for automatically selecting remote sensing image high-precision control point

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101109818A (en) * 2006-07-20 2008-01-23 中国科学院自动化研究所 Method for automatically selecting remote sensing image high-precision control point
CN101009021A (en) * 2007-01-25 2007-08-01 复旦大学 Video stabilizing method based on matching and tracking of characteristic

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Target tracking using corner matching; Luo Gang et al.; Chinese Optics and Applied Optics; 31 Dec. 2009; Vol. 2, No. 06; pp. 1-4 *

Also Published As

Publication number Publication date
CN102456225A (en) 2012-05-16

Similar Documents

Publication Publication Date Title
CN107025668B (en) Design method of visual odometer based on depth camera
Hu et al. Moving object detection and tracking from video captured by moving camera
Zhou et al. Efficient road detection and tracking for unmanned aerial vehicle
US9269012B2 (en) Multi-tracker object tracking
Yilmaz et al. Contour-based object tracking with occlusion handling in video acquired using mobile cameras
EP2710554B1 (en) Head pose estimation using rgbd camera
Brox et al. Large displacement optical flow
US8368766B2 (en) Video stabilizing method and system using dual-camera system
Senior et al. Appearance models for occlusion handling
US7376246B2 (en) Subspace projection based non-rigid object tracking with particle filters
Zhu et al. Object tracking in structured environments for video surveillance applications
Cohen et al. Detecting and tracking moving objects for video surveillance
US8958600B2 (en) Monocular 3D pose estimation and tracking by detection
Zhao et al. Segmentation and tracking of multiple humans in complex situations
Wu et al. Real-time human detection using contour cues
CN105245841B (en) A kind of panoramic video monitoring system based on CUDA
Li et al. Saliency model-based face segmentation and tracking in head-and-shoulder video sequences
US7831094B2 (en) Simultaneous localization and mapping using multiple view feature descriptors
Kondori et al. 3D head pose estimation using the Kinect
Paragios et al. Geodesic active regions for motion estimation and tracking
Venkatesh et al. Efficient object-based video inpainting
KR101733131B1 (en) 3D motion recognition method and apparatus
Shi et al. Real-time tracking using level sets
Bleiweiss et al. Fusing time-of-flight depth and color for real-time segmentation and tracking
Kundu et al. Moving object detection by multi-view geometric techniques from a single camera mounted robot

Legal Events

Date Code Title Description
PB01 Publication
C06 Publication
SE01 Entry into force of request for substantive examination
C10 Entry into substantive examination
GR01 Patent grant
C14 Grant of patent or utility model