CN104156982A - Moving object tracking method and device

Info

Publication number: CN104156982A (granted as CN104156982B)
Authority: CN (China)
Application number: CN201410373576.2A
Other languages: Chinese (zh)
Prior art keywords: motion template, moving target, motion, reference point
Inventors: 陈中欣, 袁誉乐, 赵勇, 谭兵
Assignee (current and original): Huawei Technologies Co Ltd
Events: application filed by Huawei Technologies Co Ltd; priority to CN201410373576.2A; publication of CN104156982A; application granted; publication of CN104156982B
Legal status: Granted; Active

Abstract

An embodiment of the invention provides a moving object tracking method and device. The moving object tracking method can comprise the following steps: obtaining an image from a video sequence; obtaining the foreground picture of the image; performing morphological processing on the foreground picture; performing connected-region extraction on the morphologically processed foreground picture to obtain the moving target contained in the foreground picture; and determining the motion trajectory of the moving target based on the motion template corresponding to the moving target. By adopting the technical scheme of the invention, the computational complexity of tracking moving targets is reduced.

Description

Moving target tracking method and device
Technical field
The present invention relates to the technical field of image processing, and in particular to a moving target tracking method and device.
Background art
Intelligent video surveillance involves computer image processing, video image processing, pattern recognition, artificial intelligence and many other fields, and therefore has strong research value. Its applications are very wide, such as commercial monitoring of hotels, communities, buildings and shopping malls; monitoring of medical facilities, airports, stations and traffic scenes in public utilities; and video-based weapon aiming systems in the military.
The core of intelligent video surveillance is the method for detecting and tracking moving targets. The quality of a target tracking method is evaluated by two indexes, real-time performance and accuracy, and detecting and tracking moving targets in complex scenes is the major difficulty in moving target tracking.
Existing moving target tracking methods mainly include the optical flow method and the particle filter method. However, existing methods essentially all need to analyze and compute each pixel of each image in the video sequence separately, which makes the computational complexity of the prior art very high.
Summary of the invention
The embodiments of the present invention provide a moving target tracking method and device, so as to reduce the computational complexity of tracking moving targets.
A first aspect of the present invention provides a moving target tracking method, which can comprise:
obtaining an image from a video sequence;
obtaining the foreground picture of the image;
performing morphological processing on the foreground picture;
performing connected-region extraction on the morphologically processed foreground picture to obtain the moving target contained in the foreground picture; and determining the motion trajectory of the moving target based on the motion template corresponding to the moving target.
With reference to the first aspect, in a first possible embodiment of the first aspect, performing morphological processing on the foreground picture comprises: performing at least one of the following morphological operations on the foreground picture: filtering, dilation and erosion, opening, and closing.
With reference to the first aspect or the first possible embodiment of the first aspect, in a second possible embodiment of the first aspect, determining the motion trajectory of the moving target based on the motion template corresponding to the moving target comprises:
calculating the Euclidean distance between the moving target and each motion template in a motion template set;
determining, based on the calculated Euclidean distances between the moving target and each motion template in the motion template set, the matching degree between the moving target and each motion template in the set;
if a first motion template matching the moving target is determined based on the matching degrees, adding the reference point coordinate of the moving target to the trajectory array in the first motion template, and determining the motion trajectory of the moving target based on the reference point coordinates currently recorded in the trajectory array in the first motion template, wherein the first motion template is one of the motion templates in the motion template set.
With reference to the second possible embodiment of the first aspect, in a third possible embodiment of the first aspect, the method further comprises:
if it is determined based on the matching degrees that the moving target matches none of the motion templates in the motion template set, generating a first motion template corresponding to the moving target and adding the first motion template to the motion template set, wherein the reference point coordinate of the moving target is recorded in the trajectory array of the first motion template.
With reference to the second or third possible embodiment of the first aspect, in a fourth possible embodiment of the first aspect, calculating the Euclidean distance between the moving target and each motion template in the motion template set comprises:
constructing a distance matrix Dist from the motion template set and the moving target:
$$\mathrm{Dist} = \begin{bmatrix} D_1 \\ D_2 \\ \vdots \\ D_m \end{bmatrix};$$
wherein the number of rows m of the distance matrix equals the number of motion templates currently contained in the motion template set, and element D_i in Dist represents the Euclidean distance between the moving target and motion template i in the motion template set.
With reference to the fourth possible embodiment of the first aspect, in a fifth possible embodiment of the first aspect, D_i is calculated by the following formula:
$$D_i = \begin{cases} \sqrt{(x_i + dx - x_j)^2 + (y_i + dy - y_j)^2}, & \text{if } \sqrt{(x_i + dx - x_j)^2 + (y_i + dy - y_j)^2} < T \\ T, & \text{otherwise} \end{cases}$$
wherein T is a first threshold;
(x_j, y_j) is the reference point coordinate of the moving target;
(x_i, y_i) is the reference point coordinate of motion template i in the motion template set;
(dx, dy) is the predicted displacement recorded in motion template i.
With reference to the fourth or fifth possible embodiment of the first aspect, in a sixth possible embodiment of the first aspect, determining, based on the calculated Euclidean distances between the moving target and each motion template in the motion template set, the matching degree between the moving target and each motion template in the set comprises:
constructing a matching matrix Match based on the distance matrix:
$$\mathrm{Match} = \begin{bmatrix} M_1 \\ M_2 \\ \vdots \\ M_m \end{bmatrix}$$
wherein the number of rows m of the matching matrix equals the number of motion templates currently contained in the motion template set; element M_i in Match represents the matching degree between the moving target and motion template i in the set, and this matching degree is determined based on the Euclidean distance between the moving target and motion template i.
With reference to the sixth possible embodiment of the first aspect, in a seventh possible embodiment of the first aspect,
M_i is calculated by the following formula (the formula is not reproduced in this text).
With reference to any one of the second through seventh possible embodiments of the first aspect, in an eighth possible embodiment of the first aspect, after the first motion template matching the moving target is determined based on the matching degrees, the method further comprises updating the predicted displacement (dx, dy) contained in the first motion template as follows:
$$\begin{cases} dx = x_{i1} - x_i \\ dy = y_{i1} - y_i \end{cases}$$
wherein (x_{i1}, y_{i1}) is the reference point coordinate of the moving target, and (x_i, y_i) is the reference point coordinate most recently added to the trajectory array of the first motion template before the reference point coordinate of the moving target is added.
With reference to any one of the second through eighth possible embodiments of the first aspect, in a ninth possible embodiment of the first aspect, after the first motion template matching the moving target is determined based on the matching degrees,
the method also comprises:
increasing by s1 the confidence Threshold_In1, recorded in the first motion template, that the first motion template becomes a motion tracking template;
decreasing by s1 the confidence Threshold_In1 of becoming a motion tracking template recorded in the other motion templates in the motion template set besides the first motion template, and decreasing by s2 the template-disappearance confidence Threshold_In2 recorded in those other motion templates, wherein s1 is a positive integer and s2 is a positive integer.
With reference to any one of the second through ninth possible embodiments of the first aspect, in a tenth possible embodiment of the first aspect, determining the motion trajectory of the moving target based on the reference point coordinates currently recorded in the trajectory array in the first motion template comprises:
calculating the direction gradients between each two adjacent reference point coordinates among the P1 earliest-recorded reference point coordinates in the trajectory array of the first motion template to obtain P1-1 direction gradients, and calculating the angles between adjacent direction gradients among the P1-1 direction gradients to obtain P1-2 angles, wherein P1 is a positive integer greater than 2;
calculating the direction gradients between each two adjacent reference point coordinates among the P2 latest-recorded reference point coordinates in the trajectory array of the first motion template to obtain P2-1 direction gradients, and calculating the angles between adjacent direction gradients among the P2-1 direction gradients to obtain P2-2 angles, wherein P2 is a positive integer greater than 2;
if the number of angles greater than a first angle threshold among the P1-2 angles is greater than P3, the number of angles greater than a second angle threshold among the P2-2 angles is greater than P4, and the area of the region where the trajectory corresponding to the reference point coordinates recorded in the trajectory array of the first motion template lies is greater than a first area threshold, drawing the motion trajectory of the moving target from the reference point coordinates currently recorded in the trajectory array of the first motion template.
With reference to any one of the second through tenth possible embodiments of the first aspect, in an eleventh possible embodiment of the first aspect, the reference point coordinate is a center point coordinate or a centroid coordinate.
A second aspect of the present invention provides a moving target tracking device, comprising:
an acquiring unit, configured to obtain an image from a video sequence;
an obtaining unit, configured to obtain the foreground picture of the image;
a processing unit, configured to perform morphological processing on the foreground picture;
an extraction unit, configured to perform connected-region extraction on the morphologically processed foreground picture to obtain the moving target contained in the foreground picture;
a tracking processing unit, configured to determine the motion trajectory of the moving target based on the motion template corresponding to the moving target.
With reference to the second aspect, in a first possible embodiment of the second aspect,
performing morphological processing on the foreground picture comprises: performing at least one of the following morphological operations on the foreground picture: filtering, dilation and erosion, opening, and closing.
With reference to the second aspect or the first possible embodiment of the second aspect, in a second possible embodiment of the second aspect,
the tracking processing unit is specifically configured to: calculate the Euclidean distance between the moving target and each motion template in a motion template set; determine, based on the calculated Euclidean distances between the moving target and each motion template in the set, the matching degree between the moving target and each motion template in the set; and, if a first motion template matching the moving target is determined based on the matching degrees, add the reference point coordinate of the moving target to the trajectory array in the first motion template and determine the motion trajectory of the moving target based on the reference point coordinates currently recorded in that trajectory array, wherein the first motion template is one of the motion templates in the motion template set.
With reference to the second possible embodiment of the second aspect, in a third possible embodiment of the second aspect,
the tracking processing unit is further configured to: if it is determined based on the matching degrees that the moving target matches none of the motion templates in the motion template set, generate a first motion template corresponding to the moving target and add it to the motion template set, wherein the reference point coordinate of the moving target is recorded in the trajectory array of the first motion template.
With reference to the second or third possible embodiment of the second aspect, in a fourth possible embodiment of the second aspect,
in the aspect of calculating the Euclidean distance between the moving target and each motion template in the motion template set, the tracking processing unit is specifically configured to construct a distance matrix Dist from the motion template set and the moving target:
$$\mathrm{Dist} = \begin{bmatrix} D_1 \\ D_2 \\ \vdots \\ D_m \end{bmatrix};$$
wherein the number of rows m of the distance matrix equals the number of motion templates currently contained in the motion template set, and element D_i in Dist represents the Euclidean distance between the moving target and motion template i in the motion template set.
With reference to the fourth possible embodiment of the second aspect, in a fifth possible embodiment of the second aspect, D_i is calculated by the following formula:
$$D_i = \begin{cases} \sqrt{(x_i + dx - x_j)^2 + (y_i + dy - y_j)^2}, & \text{if } \sqrt{(x_i + dx - x_j)^2 + (y_i + dy - y_j)^2} < T \\ T, & \text{otherwise} \end{cases}$$
wherein T is a first threshold;
(x_j, y_j) is the reference point coordinate of the moving target;
(x_i, y_i) is the reference point coordinate of motion template i in the motion template set;
(dx, dy) is the predicted displacement recorded in motion template i.
With reference to the fourth or fifth possible embodiment of the second aspect, in a sixth possible embodiment of the second aspect, in the aspect of determining, based on the calculated Euclidean distances between the moving target and each motion template in the motion template set, the matching degree between the moving target and each motion template in the set, the tracking processing unit is specifically configured to construct a matching matrix Match based on the distance matrix:
$$\mathrm{Match} = \begin{bmatrix} M_1 \\ M_2 \\ \vdots \\ M_m \end{bmatrix}$$
wherein the number of rows m of the matching matrix equals the number of motion templates currently contained in the motion template set; element M_i in Match represents the matching degree between the moving target and motion template i in the set, and this matching degree is determined based on the Euclidean distance between the moving target and motion template i.
With reference to the sixth possible embodiment of the second aspect, in a seventh possible embodiment of the second aspect,
M_i is calculated by the following formula (the formula is not reproduced in this text).
With reference to any one of the second through seventh possible embodiments of the second aspect, in an eighth possible embodiment of the second aspect,
after the first motion template matching the moving target is determined based on the matching degrees, the tracking processing unit is further configured to update the predicted displacement (dx, dy) contained in the first motion template as follows:
$$\begin{cases} dx = x_{i1} - x_i \\ dy = y_{i1} - y_i \end{cases}$$
wherein (x_{i1}, y_{i1}) is the reference point coordinate of the moving target, and (x_i, y_i) is the reference point coordinate most recently added to the trajectory array of the first motion template before the reference point coordinate of the moving target is added.
With reference to any one of the second through eighth possible embodiments of the second aspect, in a ninth possible embodiment of the second aspect, after the first motion template matching the moving target is determined based on the matching degrees, the tracking processing unit is further configured to: increase by s1 the confidence Threshold_In1, recorded in the first motion template, that the first motion template becomes a motion tracking template;
and decrease by s1 the confidence Threshold_In1 of becoming a motion tracking template recorded in the other motion templates in the motion template set besides the first motion template, and decrease by s2 the template-disappearance confidence Threshold_In2 recorded in those other motion templates, wherein s1 is a positive integer and s2 is a positive integer.
With reference to any one of the second through ninth possible embodiments of the second aspect, in a tenth possible embodiment of the second aspect, in the aspect of determining the motion trajectory of the moving target based on the reference point coordinates currently recorded in the trajectory array in the first motion template, the tracking processing unit is specifically configured to:
calculate the direction gradients between each two adjacent reference point coordinates among the P1 earliest-recorded reference point coordinates in the trajectory array of the first motion template to obtain P1-1 direction gradients, and calculate the angles between adjacent direction gradients among the P1-1 direction gradients to obtain P1-2 angles, wherein P1 is a positive integer greater than 2;
calculate the direction gradients between each two adjacent reference point coordinates among the P2 latest-recorded reference point coordinates in the trajectory array of the first motion template to obtain P2-1 direction gradients, and calculate the angles between adjacent direction gradients among the P2-1 direction gradients to obtain P2-2 angles, wherein P2 is a positive integer greater than 2;
if the number of angles greater than a first angle threshold among the P1-2 angles is greater than P3, the number of angles greater than a second angle threshold among the P2-2 angles is greater than P4, and the area of the region where the trajectory corresponding to the reference point coordinates recorded in the trajectory array of the first motion template lies is greater than a first area threshold, draw the motion trajectory of the moving target from the reference point coordinates currently recorded in the trajectory array of the first motion template.
With reference to any one of the second through tenth possible embodiments of the second aspect, in an eleventh possible embodiment of the second aspect, the reference point coordinate is a center point coordinate or a centroid coordinate.
It can be seen that, in this embodiment, after the foreground picture of an image has undergone morphological processing, connected-region extraction is performed on the processed foreground picture to obtain the moving target contained in it, and the motion trajectory of the moving target is determined based on the corresponding motion template. Because this scheme tracks each moving target as a whole, rather than performing tracking computation on every pixel as the prior art requires, the above technical scheme helps greatly reduce the computational complexity of tracking moving targets.
The terms "first", "second", "third" and "fourth" in the specification, claims and accompanying drawings of the present invention are used to distinguish different objects, not to describe a particular order. In addition, the terms "comprise" and "have", and any variants thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, product or device comprising a series of steps or units is not limited to the listed steps or units, but optionally also comprises steps or units that are not listed, or optionally comprises other steps or units inherent to the process, method, product or device.
In an embodiment of the moving target tracking method of the present invention, the method can comprise: obtaining an image from a video sequence; obtaining the foreground picture of the image; performing morphological processing on the foreground picture; performing connected-region extraction on the morphologically processed foreground picture to obtain the moving target contained in the foreground picture; and determining the motion trajectory of the moving target based on the motion template corresponding to the moving target.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a moving target tracking method provided by an embodiment of the present invention. As shown in Fig. 1, the moving target tracking method provided by this embodiment can comprise the following steps:
101. Obtain an image from a video sequence.
102. Obtain the foreground picture of the image.
The foreground picture of the image may contain one or more moving targets, or may contain no moving target at all. The embodiments of the present invention mainly take the scenario in which the foreground picture contains one or more moving targets as an example.
103. Perform morphological processing on the foreground picture.
104. Perform connected-region extraction on the morphologically processed foreground picture to obtain the moving target contained in the foreground picture; determine the motion trajectory of the moving target based on the motion template corresponding to the moving target.
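By way of illustration only, the extraction part of step 104 could be realized with a connected-component analysis on the binary foreground picture, as in the following Python sketch using OpenCV; the min_area noise filter is an assumed parameter that does not come from this text:

```python
import cv2

def extract_moving_targets(fg_binary, min_area=50):
    """Extract moving targets from a binary foreground picture as connected
    regions; returns centroid, bounding box and area per target.
    min_area is an assumed small-noise filter, not specified in the patent."""
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(fg_binary)
    targets = []
    for i in range(1, num):  # label 0 is the background region
        x, y, w, h, area = stats[i]
        if area >= min_area:
            targets.append({"centroid": tuple(centroids[i]),
                            "bbox": (int(x), int(y), int(w), int(h)),
                            "area": int(area)})
    return targets
```

The per-component centroid returned here can serve directly as the reference point coordinate discussed below.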
It can be seen that, in this embodiment, after the foreground picture of an image has undergone morphological processing, connected-region extraction is performed on the processed foreground picture to obtain the moving target contained in it, and the motion trajectory of the moving target is determined based on the corresponding motion template. Because this scheme tracks each moving target as a whole, rather than performing tracking computation on every pixel as the prior art requires, the above technical scheme helps greatly reduce the computational complexity of tracking moving targets.
Optionally, in some possible embodiments of the present invention, obtaining the foreground picture of the image can comprise: obtaining the background picture of the image, and obtaining the foreground picture of the image based on the image and the background picture. The background picture contains no moving target and can be a grayscale background picture or a color background picture. The foreground picture is an image in which the moving targets are marked, e.g. all non-target regions are marked black and all moving target regions white. For example, the background modeling method can adopt a Gaussian mixture background modeling algorithm: images are read from the video sequence and used with the Gaussian mixture algorithm to build the background picture, and the foreground picture is then obtained by applying a background difference algorithm to the image and the background picture. The foreground picture may contain one or more moving targets, or may contain no moving target at all; the embodiments of the present invention mainly take the scenario in which it contains one or more moving targets as an example.
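As a minimal sketch of this foreground extraction, the following Python code uses OpenCV's Gaussian-mixture background subtractor as one possible realization of the background modeling and background difference steps described above; the history and variance-threshold values are illustrative assumptions:

```python
import cv2

def foreground_pictures(video_path):
    """Yield (frame, binary foreground picture) pairs: moving-target pixels
    are marked white (255) and background pixels black (0)."""
    cap = cv2.VideoCapture(video_path)
    # Gaussian mixture background model; parameter values are illustrative
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200,
                                                    varThreshold=16,
                                                    detectShadows=False)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)  # background difference against the model
        _, fg = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
        yield frame, fg
    cap.release()
```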
In some possible embodiments of the present invention, performing morphological processing on the foreground picture can comprise performing at least one of the following morphological operations on it: filtering, dilation and erosion, opening, and closing.
For example, if the obtained foreground picture contains relatively heavy noise, filtering and dilation-erosion processing can be applied to it. Median filtering can first be applied to the foreground picture to remove noise, and the filtered foreground picture can then be dilated first and eroded afterwards, so as to fill holes in the moving targets and merge their fragments. Suppose the median filtering template is an n*n template; the foreground picture can be scanned with this template, where n is preferably odd, e.g. 3, 5, 7, 9, 11 or 13.
When the pixel values at the corresponding positions of the foreground picture covered by the median filtering template are
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix},$$
the n*n pixel values can first be sorted and the median chosen to replace the pixel value at the corresponding position in the foreground picture. For example, with a 3*3 median filtering template applied to a binary foreground picture, a point is judged to be a foreground point only if at least five of the nine pixels in its neighborhood are foreground points; this removes some small isolated noise.
Optionally, dilation and erosion can adopt the 3*3 template
$$\mathrm{Mask} = \begin{bmatrix} 255 & 255 & 255 \\ 255 & 255 & 255 \\ 255 & 255 & 255 \end{bmatrix},$$
whose origin is the template center. Dilation takes the AND of the template with the corresponding pixel values of the foreground picture: if the result is 0, the binary-image position corresponding to the origin is assigned 0, otherwise 255. Erosion takes the AND of the template with the corresponding pixel values of the binary image: if the result is 255, the position corresponding to the origin is assigned 255, otherwise 0. Of course, in some application scenarios dilation and erosion can also be performed in other ways.
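A sketch of this morphological stage with OpenCV, assuming the 3*3 median filtering template and the dilate-first, erode-afterwards order described above:

```python
import cv2
import numpy as np

def morphological_processing(fg):
    """Median-filter the binary foreground picture, then dilate and erode it
    to fill holes in the moving targets and merge their fragments."""
    fg = cv2.medianBlur(fg, 3)                 # 3*3 median filter removes small isolated noise
    kernel = np.ones((3, 3), np.uint8)         # 3*3 structuring element, origin at the center
    fg = cv2.dilate(fg, kernel, iterations=1)  # dilation: fill holes, connect fragments
    fg = cv2.erode(fg, kernel, iterations=1)   # erosion: restore approximate target size
    return fg
```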
Optionally, in some possible embodiments of the present invention, determining the motion trajectory of the moving target based on its corresponding motion template can comprise: calculating the Euclidean distance between the moving target and each motion template in a motion template set; determining, based on the calculated Euclidean distances, the matching degree between the moving target and each motion template in the set; and, if a first motion template matching the moving target is determined based on the matching degrees, adding the reference point coordinate of the moving target to the trajectory array in the first motion template and determining the motion trajectory of the moving target based on the reference point coordinates currently recorded in that array, wherein the first motion template is one of the motion templates in the set.
In addition, the moving target tracking method can further comprise: if it is determined based on the matching degrees that the moving target matches none of the motion templates in the set, generating a first motion template corresponding to the moving target and adding it to the motion template set, with the reference point coordinate of the moving target recorded in its trajectory array.
Optionally, in some possible embodiments of the present invention, calculating the Euclidean distance between the moving target and each motion template in the motion template set can comprise:
constructing a distance matrix Dist from the motion template set and the moving target:
$$\mathrm{Dist} = \begin{bmatrix} D_1 \\ D_2 \\ \vdots \\ D_m \end{bmatrix};$$
wherein the number of rows m of the distance matrix equals the number of motion templates currently contained in the motion template set, and element D_i in Dist represents the Euclidean distance between the moving target and motion template i in the set.
D_i can be calculated, for example, by the following formula:
$$D_i = \begin{cases} \sqrt{(x_i + dx - x_j)^2 + (y_i + dy - y_j)^2}, & \text{if } \sqrt{(x_i + dx - x_j)^2 + (y_i + dy - y_j)^2} < T \\ T, & \text{otherwise} \end{cases}$$
wherein (x_i, y_i) is the reference point coordinate of motion template i in the motion template set, (x_j, y_j) is the reference point coordinate of the moving target, (dx, dy) is the predicted displacement recorded in motion template i, and T is the first threshold, whose value range can be, for example, [20, 150], e.g. 20, 25, 30, 35, 40, 50, 120, 140 or 150.
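The following numpy sketch computes this distance column for one moving target against the m templates in the set, following the clipped-Euclidean formula above; the template record layout (the 'ref' and 'pred' fields) is an assumed data structure, not one prescribed by this text:

```python
import numpy as np

def distance_vector(templates, target_ref, T=30.0):
    """Dist column for one moving target: clipped Euclidean distance to each
    motion template's predicted position. `templates` is an assumed list of
    dicts holding 'ref' = (x_i, y_i) and 'pred' = (dx, dy); T is the first
    threshold."""
    xj, yj = target_ref
    dist = np.empty(len(templates))
    for i, tpl in enumerate(templates):
        xi, yi = tpl["ref"]
        dx, dy = tpl["pred"]
        d = np.hypot(xi + dx - xj, yi + dy - yj)  # distance to predicted position
        dist[i] = d if d < T else T               # distances of T or more are clipped to T
    return dist
```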
Optionally, in some possible embodiments of the present invention, determining, based on the calculated Euclidean distances between the moving target and each motion template in the set, the matching degree between the moving target and each motion template in the set can comprise:
constructing a matching matrix Match based on the distance matrix:
$$\mathrm{Match} = \begin{bmatrix} M_1 \\ M_2 \\ \vdots \\ M_m \end{bmatrix}$$
wherein the number of rows m of the matching matrix Match equals the number of motion templates currently contained in the motion template set; element M_i in Match represents the matching degree between the moving target and motion template i in the set, and this matching degree is determined based on the Euclidean distance between the moving target and motion template i.
Optionally, in some possible embodiments of the present invention, M_i is calculated by a formula that is not reproduced in this text,
wherein element M_k represents the matching degree between the moving target and motion template k in the set.
Optionally, in other possible embodiments of the present invention, M_i can also be calculated by another formula that is not reproduced in this text,
wherein a1 can be a positive number; for example a1 can equal 0.2, 1, 2, 3, 4.5, 8.3 or another value.
When M_i is greater than the other elements of the matching matrix Match (for example, when M_i equals a1), the moving target is considered to match motion template i in the set.
Optionally, in some possible embodiments of the present invention, after the first motion template matching the moving target is determined based on the matching degrees (for example, when the matching degree between the moving target and the first motion template is greater than the matching degrees between the moving target and the other motion templates in the set), the method can further comprise updating the predicted displacement (dx, dy) contained in the first motion template as follows:
$$\begin{cases} dx = x_j - x_i \\ dy = y_j - y_i \end{cases}$$
wherein (x_j, y_j) is the reference point coordinate of the moving target, and (x_i, y_i) is the reference point coordinate most recently added to the trajectory array of the first motion template before the reference point coordinate of the moving target is added.
Optionally, in some possible embodiments of the present invention, after the first motion template matching the moving target is determined based on the matching degrees, the method can further comprise: increasing by s1 the confidence Threshold_In1, recorded in the first motion template, that the first motion template becomes a motion tracking template, where s1 is a positive integer, e.g. 1, 2, 3, 4, 6, 8, 10, 20, 51 or another positive integer.
For the other motion templates in the set besides the first motion template, the recorded confidence Threshold_In1 of becoming a motion tracking template is decreased by s1 and the recorded template-disappearance confidence Threshold_In2 is decreased by s2, where s2 is a positive integer, e.g. 1, 2, 3, 4, 6, 8, 10, 20, 51 or another positive integer.
Further, when the template-disappearance confidence Threshold_In2 recorded in some motion template in the set is less than or equal to S22 (S22 being, for example, less than or equal to 0), that motion template can be removed from the motion template set.
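The displacement update and the confidence bookkeeping just described can be combined into one step, as in the sketch below; the removal rule follows the text, while the dict field names themselves are assumptions:

```python
def update_after_match(templates, matched_idx, target_ref, s1=1, s2=1, S22=0):
    """After a target/template match: update the matched template's predicted
    displacement and trajectory, adjust every template's confidences, and
    remove templates whose disappearance confidence has fallen to S22 or below."""
    tpl = templates[matched_idx]
    xj, yj = target_ref
    xi, yi = tpl["trajectory"][-1]     # reference point added most recently before this one
    tpl["pred"] = (xj - xi, yj - yi)   # (dx, dy) = latest observed displacement
    tpl["trajectory"].append(target_ref)
    tpl["ref"] = target_ref            # refresh the template's recorded target position
    for k, other in enumerate(templates):
        if k == matched_idx:
            other["conf_track"] += s1  # Threshold_In1 of the matched template rises
        else:
            other["conf_track"] -= s1  # Threshold_In1 of the other templates falls
            other["conf_alive"] -= s2  # Threshold_In2 counts down toward disappearance
    templates[:] = [t for t in templates if t["conf_alive"] > S22]
```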
Further, the area of the moving target can also be used to update the area of the moving target recorded in the first motion template, and the position of the moving target can be used to update the position of the moving target recorded in the first motion template.
Optionally, in some possible embodiments of the present invention, determining the motion trajectory of the moving target based on the reference point coordinates currently recorded in the trajectory array of the first motion template can comprise:
calculating the direction gradients between each two adjacent reference point coordinates among the P1 earliest-recorded reference point coordinates in the trajectory array of the first motion template to obtain P1-1 direction gradients, and calculating the angles between adjacent direction gradients among the P1-1 direction gradients to obtain P1-2 angles, where P1 is a positive integer greater than 2;
calculating the direction gradients between each two adjacent reference point coordinates among the P2 latest-recorded reference point coordinates in the trajectory array of the first motion template to obtain P2-1 direction gradients, and calculating the angles between adjacent direction gradients among the P2-1 direction gradients to obtain P2-2 angles, where P2 is a positive integer greater than 2;
if the number of angles greater than the first angle threshold among the P1-2 angles is greater than P3, the number of angles greater than the second angle threshold among the P2-2 angles is greater than P4, and the area of the region where the trajectory corresponding to the recorded reference point coordinates lies is greater than the first area threshold, drawing the motion trajectory of the moving target from the reference point coordinates currently recorded in the trajectory array of the first motion template.
Here, two adjacent reference point coordinates in the trajectory array of the first motion template are two reference point coordinates adjacent in the time at which they were added to the array, and adjacent direction gradients are the two direction gradients computed from three reference point coordinates adjacent in insertion time. For example, if reference point coordinates f1, f2 and f3 are three adjacent coordinates in the trajectory array of the first motion template, the direction gradient between f1 and f2 is f1_2, and the direction gradient between f2 and f3 is f2_3, then f1_2 and f2_3 are adjacent direction gradients. For ease of recording, the later a reference point coordinate is added to the trajectory array of a motion template (the first motion template), the larger (or smaller) its index in that array.
The region where the trajectory corresponding to the reference point coordinates recorded in the trajectory array of the first motion template lies can be the maximum circumscribed rectangular region of that trajectory.
The value range of the first angle threshold can be, for example, [0°, 180°], with a preferred range of, for example, [30°, 90°]; it can specifically equal 30°, 38°, 45°, 60°, 70°, 90° or another angle.
The value range of the second angle threshold can be, for example, [0°, 180°], with a preferred range of, for example, [30°, 90°]; it can specifically equal 30°, 38°, 48°, 65°, 77°, 90° or another angle.
The value range of the first area threshold can be, for example, [10, 50]; it can specifically equal 10, 15, 21, 25, 30, 35, 40, 44, 48, 50 or another value.
Suppose X_min and X_max denote the minimum and maximum X coordinate values among the reference point coordinates recorded in the trajectory array, and Y_min and Y_max the minimum and maximum Y coordinate values. The Euclidean distance between (X_min, Y_min) and (X_max, Y_max) can be taken as the area of the region where the trajectory corresponding to the reference point coordinates recorded in the trajectory array of the first motion template lies.
The angle between two direction gradients is obtained from its cosine: the cosine of the angle between two direction gradients equals their inner product divided by the product of their moduli. The specific calculation formula can be as follows:
$$\mathrm{Angle}_i = \arccos\!\left(\frac{dx_{i+1}\,dx_i + dy_{i+1}\,dy_i}{\sqrt{dx_{i+1}^2 + dy_{i+1}^2}\;\sqrt{dx_i^2 + dy_i^2}}\right)\cdot\frac{180}{\pi};$$
wherein (dx_i, dy_i) is the current direction gradient and (dx_{i+1}, dy_{i+1}) is the adjacent next direction gradient.
P1 and P2 can equal, for example, 10, 12, 15 or 8; P3 can equal 6, 5 or 4; and P4 can equal 3, 4, 5 or 6. If the motion template has not been locked, the start position of the trajectory points is advanced by 2, i.e. the start position used the next time the direction gradients are calculated skips the first two points of the current calculation position: StartPos = StartPos + 2.
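Under the definitions above, the trajectory test can be sketched as follows: direction gradients between consecutive reference points, angles between adjacent gradients via the arccos formula, and the diagonal of the bounding rectangle as the trajectory "area"; the default parameter values are drawn from the example values in the text:

```python
import math

def gradient_angles(points):
    """Angles, in degrees, between adjacent direction gradients of a point list."""
    grads = [(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(points, points[1:])]
    angles = []
    for (dx0, dy0), (dx1, dy1) in zip(grads, grads[1:]):
        denom = math.hypot(dx1, dy1) * math.hypot(dx0, dy0)
        if denom == 0.0:
            angles.append(0.0)  # coincident points: treat the angle as zero
            continue
        c = max(-1.0, min(1.0, (dx1 * dx0 + dy1 * dy0) / denom))
        angles.append(math.acos(c) * 180.0 / math.pi)
    return angles

def trajectory_confirmed(traj, P1=10, P2=10, P3=5, P4=4,
                         angle_thr1=45.0, angle_thr2=45.0, area_thr=25.0):
    """Apply the three conditions for drawing the trajectory from `traj`,
    a list of reference point coordinates in insertion order."""
    early = gradient_angles(traj[:P1])   # P1 earliest points -> P1-2 angles
    late = gradient_angles(traj[-P2:])   # P2 latest points -> P2-2 angles
    xs = [p[0] for p in traj]
    ys = [p[1] for p in traj]
    area = math.hypot(max(xs) - min(xs), max(ys) - min(ys))  # bounding-box diagonal
    return (sum(a > angle_thr1 for a in early) > P3
            and sum(a > angle_thr2 for a in late) > P4
            and area > area_thr)
```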
Optionally, in some possible embodiments of the present invention, the reference point coordinate can be a center point coordinate, a centroid coordinate or another point coordinate. For example, the reference point coordinate of a moving target can be the center point coordinate of the target, its centroid coordinate, or the coordinate of another point on the target.
It can be understood that, if the foreground picture contains multiple moving targets, each moving target can be tracked in the manner described above.
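Putting the pieces together, a minimal end-to-end loop over the helper sketches above (foreground_pictures, morphological_processing, extract_moving_targets, distance_vector, update_after_match) could look as follows. The nearest-template matching here is a deliberate simplification of the matching-matrix scheme described later, and the initial confidence values are the illustrative ones from the text:

```python
def track_video(video_path, T=30.0):
    """Whole-target tracking loop: per frame, extract moving targets and match
    each one against the motion template set instead of tracking every pixel."""
    templates = []  # the motion template set, initially empty
    for frame, fg in foreground_pictures(video_path):
        fg = morphological_processing(fg)
        for target in extract_moving_targets(fg):
            ref = target["centroid"]  # reference point: centroid coordinate
            if templates:
                dist = distance_vector(templates, ref, T)
                i = int(dist.argmin())  # simplified matching: nearest template wins
                if dist[i] < T:         # a distance clipped to T means no match
                    update_after_match(templates, i, ref)
                    continue
            # no matching template: generate one for this target
            templates.append({"ref": ref, "pred": (0.0, 0.0),
                              "trajectory": [ref],
                              "conf_track": 0, "conf_alive": 25})
```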
To facilitate better understanding and implementation of the above schemes of the embodiments of the present invention, some concrete application scenarios are introduced below by way of example.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of another moving target tracking method provided by another embodiment of the present invention. As shown in Fig. 2, the method can comprise the following steps:
201. Obtain an image from a video sequence.
202. Obtain the background picture of the image.
203. Obtain the foreground picture of the image based on the image and the background picture.
The background picture contains no moving target and can be a grayscale background picture or a color background picture. The foreground picture is an image in which the moving targets are marked, e.g. all non-target regions are marked black and all moving target regions white. For example, the background modeling method can adopt a Gaussian mixture background modeling algorithm: images are read from the video sequence and used with the Gaussian mixture algorithm to build the background picture, and the foreground picture is then obtained by applying a background difference algorithm to the image and the background picture.
The foreground picture of the image may contain one or more moving targets, or may contain no moving target at all. This embodiment mainly takes the scenario in which the foreground picture contains b moving targets as an example.
204. Perform morphological processing on the foreground picture.
In some possible embodiments of the present invention, performing morphological processing on the foreground picture can comprise performing at least one of the following morphological operations on it: filtering, dilation and erosion, opening, and closing.
For example, if the obtained foreground picture contains relatively heavy noise, filtering and dilation-erosion processing can be applied to it. Median filtering can first be applied to remove noise, and the filtered foreground picture can then be dilated first and eroded afterwards, so as to fill holes in the moving targets and merge their fragments. Suppose the median filtering template is an n*n template; the foreground picture can be scanned with this template, where n is preferably odd, e.g. 3, 5, 7, 9, 11 or 13.
When the pixel values at the corresponding positions of the foreground picture covered by the median filtering template are
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix},$$
the n*n pixel values can first be sorted and the median chosen to replace the pixel value at the corresponding position in the foreground picture. For example, with a 3*3 median filtering template applied to a binary foreground picture, a point is judged to be a foreground point only if at least five of the nine pixels in its neighborhood are foreground points; this removes some small isolated noise.
Optionally, dilation and erosion can adopt the 3*3 template
$$\mathrm{Mask} = \begin{bmatrix} 255 & 255 & 255 \\ 255 & 255 & 255 \\ 255 & 255 & 255 \end{bmatrix},$$
whose origin is the template center. Dilation takes the AND of the template with the corresponding pixel values of the foreground picture: if the result is 0, the binary-image position corresponding to the origin is assigned 0, otherwise 255. Erosion takes the AND of the template with the corresponding pixel values of the binary image: if the result is 255, the position corresponding to the origin is assigned 255, otherwise 0. Of course, in some application scenarios dilation and erosion can also be performed in other ways.
205. Perform connected-region extraction on the morphologically processed foreground picture to obtain the b moving targets contained in it.
206. Determine the motion trajectories of the b moving targets based on the motion templates corresponding to the b moving targets.
Optionally, in some possible embodiments of the present invention, determining the motion trajectories based on the motion templates corresponding to the b moving targets can comprise: calculating the Euclidean distance between each of the b moving targets and each motion template in a motion template set; determining, based on the calculated Euclidean distances, the matching degree between each of the b moving targets and each motion template in the set; and, if a first motion template matching moving target j among the b moving targets is determined based on the matching degrees, adding the reference point coordinate of moving target j to the trajectory array in the first motion template and determining the motion trajectory of moving target j based on the reference point coordinates currently recorded in that array, the first motion template being one of the motion templates in the set.
In addition, the method can further comprise: if it is determined based on the matching degrees that moving target j matches none of the motion templates in the set, generating a first motion template corresponding to moving target j and adding it to the motion template set, with the reference point coordinate of moving target j recorded in its trajectory array.
Optionally, in some possible embodiments of the present invention, calculating the Euclidean distance between each of the b moving targets and each motion template in the motion template set can comprise: constructing a distance matrix Dist from the motion template set and the b moving targets.
Suppose the number of motion templates currently contained in the motion template set is m:
$$\mathrm{Dist} = \begin{bmatrix} D_{11} & D_{12} & \cdots & D_{1b} \\ D_{21} & D_{22} & \cdots & D_{2b} \\ \vdots & \vdots & \ddots & \vdots \\ D_{m1} & D_{m2} & \cdots & D_{mb} \end{bmatrix};$$
wherein the number of rows m of the distance matrix Dist equals the number of motion templates currently contained in the motion template set, and the number of columns b equals the number of moving targets contained in the obtained foreground picture. Element D_ij in Dist represents the Euclidean distance between moving target j among the b moving targets and motion template i in the set; for example, D_24 represents the Euclidean distance between moving target 4 and motion template 2.
D_ij can be calculated, for example, by the following formula:
$$D_{ij} = \begin{cases} \sqrt{(x_i + dx - x_j)^2 + (y_i + dy - y_j)^2}, & \text{if } \sqrt{(x_i + dx - x_j)^2 + (y_i + dy - y_j)^2} < T \\ T, & \text{otherwise} \end{cases}$$
wherein (x_i, y_i) is the reference point coordinate of motion template i in the set, (x_j, y_j) is the reference point coordinate of moving target j, (dx, dy) is the predicted displacement recorded in motion template i, and T is the first threshold, whose value range can be, for example, [20, 150], e.g. 20, 25, 30, 35, 40, 50, 120, 140 or 150.
For example, suppose the number of motion templates currently contained in the motion template set is 3 and the number of moving targets contained in the obtained foreground picture is 5; then
$$\mathrm{Dist} = \begin{bmatrix} D_{11} & D_{12} & D_{13} & D_{14} & D_{15} \\ D_{21} & D_{22} & D_{23} & D_{24} & D_{25} \\ D_{31} & D_{32} & D_{33} & D_{34} & D_{35} \end{bmatrix}.$$
Optionally, in some possible embodiments of the present invention, determining, based on the calculated Euclidean distances between the b moving targets and each motion template in the set, the matching degree between each of the b moving targets and each motion template in the set can comprise:
constructing a matching matrix Match based on the distance matrix:
$$\mathrm{Match} = \begin{bmatrix} M_{11} & M_{12} & \cdots & M_{1b} \\ M_{21} & M_{22} & \cdots & M_{2b} \\ \vdots & \vdots & \ddots & \vdots \\ M_{m1} & M_{m2} & \cdots & M_{mb} \end{bmatrix};$$
wherein the number of rows m of the matching matrix Match equals the number of motion templates currently contained in the set, and the number of columns b equals the number of moving targets contained in the obtained foreground picture. Element M_ij in Match represents the matching degree between moving target j among the b moving targets and motion template i in the set, and this matching degree is determined based on the Euclidean distance between moving target j and motion template i.
Optionally, in some possible embodiments of the present invention, the above M_ij is calculated by formula, row by row and column by column over the matching matrix Match, where element M_kj represents the matching degree between the above moving target j and Motion mask k in the above Motion mask set. Here a1 can be a positive number; for example, a1 can equal 0.2, 1, 2, 3, 4.5, 8.3, 10 or other values. The initial value of each element in the matching matrix Match can be 0, and after M_ij has been calculated for every row and every column of Match, 0 ≤ M_ij ≤ 2*a1.

When M_ij in the matching matrix Match equals 2*a1, the above moving target j is considered to match Motion mask i in the above Motion mask set.
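The formula itself is not reproduced in the text above, but the stated properties (entries start at 0, a1 is accumulated per row and per column, 0 ≤ M_ij ≤ 2*a1, and M_ij = 2*a1 means a match) are consistent with crediting a1 to the smallest distance in each row and in each column; the following Python sketch rests on that assumption:

```python
def build_match_matrix(dist, a1=1.0):
    """Match accumulates a1 at each row minimum and each column minimum of
    Dist; entries reaching 2*a1 are mutual nearest neighbours (matches)."""
    m, b = len(dist), len(dist[0])
    match = [[0.0] * b for _ in range(m)]
    for i in range(m):                                   # per-row pass
        j_min = min(range(b), key=lambda j: dist[i][j])
        match[i][j_min] += a1
    for j in range(b):                                   # per-column pass
        i_min = min(range(m), key=lambda i: dist[i][j])
        match[i_min][j] += a1
    return [(i, j) for i in range(m) for j in range(b)
            if match[i][j] == 2 * a1]                    # matched pairs
```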
Optionally, in some possible embodiments of the present invention, after the first Motion mask matching the above moving target j is determined based on the above matching degree, the above motion target tracking method can further comprise: updating the predictive displacement (dx, dy) contained in the first Motion mask as follows:

$$dx = x_j - x_i, \qquad dy = y_j - y_i,$$

where (x_j, y_j) is the reference point coordinate of the above moving target j, and (x_i, y_i) is the reference point coordinate most recently added to the movement locus array of the first Motion mask before the reference point coordinate of moving target j is added.
Optionally, in some possible embodiments of the present invention, after the first Motion mask matching the above moving target is determined based on the above matching degree, the above motion target tracking method can further comprise: adding s1 to the confidence Threshold_In1, recorded in the first Motion mask, of the first Motion mask becoming a motion tracking template, where s1 is a positive integer. For example, s1 can equal 1, 2, 3, 4, 6, 8, 10, 20, 51 or another positive integer.
Optionally, in some possible embodiments of the present invention, for each Motion mask in the above Motion mask set that does not match any of the above b moving targets, the confidence Threshold_In1 of becoming a motion tracking template recorded in that Motion mask is decreased by s1, and the Motion-mask-disappearance confidence Threshold_In2 recorded in that Motion mask is decreased by s2, where s2 is a positive integer. For example, s2 can equal 1, 2, 3, 4, 6, 8, 10, 20, 51 or another positive integer.

Further, when the Motion-mask-disappearance confidence Threshold_In2 recorded in a certain Motion mask in the above Motion mask set is less than or equal to S22 (S22 is, for example, less than or equal to 0), that Motion mask can be removed from the Motion mask set.

Further, the area of the moving target recorded in the above first Motion mask can also be updated with the area of the above moving target, and the position of the moving target recorded in the first Motion mask can also be updated with the position of the above moving target.
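As an illustration, the bookkeeping of the last few paragraphs can be sketched as follows; the attribute names (track, dx, dy, threshold_in1, threshold_in2, area, pos) are assumptions made for the example:

```python
def update_matched_mask(mask, target, s1=1):
    """Update a Motion mask that matched a moving target this frame."""
    x_prev, y_prev = mask.track[-1]                 # most recently added reference point
    mask.dx, mask.dy = target.x - x_prev, target.y - y_prev   # new predictive displacement
    mask.track.append((target.x, target.y))         # extend the movement locus array
    mask.threshold_in1 += s1                        # confidence of becoming a tracking template
    mask.area, mask.pos = target.area, target.pos   # refresh recorded area and position

def decay_unmatched_masks(mask_set, matched, s1=1, s2=1, s22=0):
    """Decay confidences of unmatched masks; drop masks deemed disappeared."""
    for mask in list(mask_set):
        if mask not in matched:
            mask.threshold_in1 -= s1
            mask.threshold_in2 -= s2
            if mask.threshold_in2 <= s22:           # mask has disappeared
                mask_set.remove(mask)
```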
Optionally, in some possible embodiments of the present invention, determining the movement locus of the above moving target based on the reference point coordinates currently recorded in the movement locus array in the above first Motion mask can comprise: determining the movement locus of the moving target based on the reference point coordinates currently recorded in that array when the motion tracking template confidence Threshold_In1 recorded in the first Motion mask is greater than a threshold Thr1 (where Thr1 can, for example, be 8, 10.11, 15, etc., and the initial value of Threshold_In1 can, for example, be 0, 1.2, etc.), and/or when the Motion-mask-disappearance confidence Threshold_In2 recorded in the first Motion mask is greater than a threshold Thr2 (where Thr2 can, for example, be 20, 22, 25, etc., and the initial value of Threshold_In2 is, for example, 25).
Optionally, in some possible embodiments of the present invention, determining the movement locus of the above moving target based on the reference point coordinates currently recorded in the movement locus array in the above first Motion mask can comprise:

calculating the direction gradients between adjacent reference point coordinates among the P1 earliest-recorded reference point coordinates in the movement locus array in the first Motion mask, to obtain P1-1 direction gradients, and calculating the angles between adjacent direction gradients among these P1-1 direction gradients, to obtain P1-2 angles, where P1 is a positive integer greater than 2;

calculating the direction gradients between adjacent reference point coordinates among the P2 latest-recorded reference point coordinates in the movement locus array in the first Motion mask, to obtain P2-1 direction gradients, and calculating the angles between adjacent direction gradients among these P2-1 direction gradients, to obtain P2-2 angles, where P2 is a positive integer greater than 2. Here, two adjacent reference point coordinates in the movement locus array in the first Motion mask are two coordinates adjacent in the time at which they were added to that array, and adjacent direction gradients are the two direction gradients calculated from three reference point coordinates adjacent in the time at which they were added to that array. For example, if reference point coordinates f1, f2 and f3 are three adjacent reference point coordinates in the movement locus array in the first Motion mask, the direction gradient between f1 and f2 is f1_2, and the direction gradient between f2 and f3 is f2_3, then f1_2 and f2_3 are adjacent direction gradients. For ease of recording, the later a reference point coordinate is added to the movement locus array in a Motion mask (the first Motion mask), the larger (or, under the opposite convention, the smaller) its index in that array;

if the number of angles greater than the first angle threshold among the above P1-2 angles is greater than P3, the number of angles greater than the second angle threshold among the above P2-2 angles is greater than P4, and the area of the region where the movement locus corresponding to the reference point coordinates recorded in the movement locus array in the first Motion mask is located is greater than the first area threshold, then drawing the movement locus of the above moving target using the reference point coordinates currently recorded in that array.
Here, the region where the movement locus corresponding to the reference point coordinates recorded in the movement locus array in the first Motion mask is located can be the maximum circumscribed rectangular region of that movement locus.

The value range of the first angle threshold can, for example, be [0°, 180°], with a preferred range of, for example, [30°, 90°]; it can specifically equal 30°, 38°, 45°, 60°, 70°, 90° or another angle.

The value range of the second angle threshold can, for example, be [0°, 180°], with a preferred range of, for example, [30°, 90°]; it can specifically equal 30°, 38°, 48°, 65°, 77°, 90° or another angle.

The value range of the first area threshold can, for example, be [10, 50]; specific examples are 10, 15, 21, 25, 30, 35, 40, 44, 48, 50 or other values.

Suppose X_min denotes the minimum X coordinate value among the reference point coordinates recorded in the movement locus array, X_max the maximum X coordinate value, Y_min the minimum Y coordinate value, and Y_max the maximum Y coordinate value. The Euclidean distance between (X_min, Y_min) and (X_max, Y_max) can then be taken as the area of the region where the movement locus corresponding to the reference point coordinates recorded in the movement locus array in the above first Motion mask is located.
Here, the angle between two direction gradients is obtained from the cosine of the angle between them, which equals their inner product divided by the product of their moduli; a specific calculation formula can be as follows:

$$\mathrm{Angle}_i = \arccos\!\left(\frac{dx_{i+1} \cdot dx_i + dy_{i+1} \cdot dy_i}{\sqrt{dx_{i+1}^2 + dy_{i+1}^2} \cdot \sqrt{dx_i^2 + dy_i^2}}\right) \cdot \frac{180}{\pi};$$

where (dx_i, dy_i) is the current direction gradient and (dx_{i+1}, dy_{i+1}) is the next adjacent direction gradient.
Here, P1 and P2 can, for example, equal 10, 12, 15 or 8; P3 can equal 6, 5 or 4; and P4 can equal 3, 4, 5 or 6. If the Motion mask has not been locked, the start position of the track points is increased by 2, so that the start position of the track points used in the next direction-gradient calculation skips the first two points of the current calculation position, that is, StartPos = StartPos + 2.
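Putting these pieces together, the following Python sketch evaluates the locking test described above for one movement locus array; the parameter defaults are example values drawn from the ranges given, and the helper names are assumptions:

```python
import math

def angles_between_gradients(points):
    """Angles (degrees) between adjacent direction gradients of a point list."""
    grads = [(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(points, points[1:])]
    out = []
    for (dx0, dy0), (dx1, dy1) in zip(grads, grads[1:]):
        denom = math.hypot(dx0, dy0) * math.hypot(dx1, dy1)
        if denom == 0:
            continue  # repeated point: no defined direction
        c = max(-1.0, min(1.0, (dx1 * dx0 + dy1 * dy0) / denom))
        out.append(math.degrees(math.acos(c)))
    return out

def locus_should_be_drawn(track, p1=10, p2=10, p3=5, p4=4,
                          angle_thr1=45.0, angle_thr2=45.0, area_thr=25.0):
    """Apply the three conditions: angle counts over the earliest P1 and
    latest P2 reference points, plus the diagonal extent of the locus region."""
    early = angles_between_gradients(track[:p1])
    late = angles_between_gradients(track[-p2:])
    xs, ys = [p[0] for p in track], [p[1] for p in track]
    extent = math.hypot(max(xs) - min(xs), max(ys) - min(ys))
    return (sum(a > angle_thr1 for a in early) > p3 and
            sum(a > angle_thr2 for a in late) > p4 and
            extent > area_thr)
```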
Optionally, in some possible embodiments of the present invention, the above reference point coordinate is a center point coordinate, a centroid point coordinate or another point coordinate. For example, the reference point coordinate of a moving target can be the center point coordinate of the moving target, its centroid point coordinate, or the coordinate of another point on the moving target.

It can be appreciated that each of the b moving targets can be tracked in a manner similar to the tracking of moving target j exemplified above.

It can be seen that, in this embodiment, after morphology processing is performed on the foreground picture of an image, a connected region extraction operation is performed on the morphology-processed foreground picture to obtain the b moving targets contained in the foreground picture, and the movement loci of some or all of the b moving targets are determined based on the b Motion masks corresponding to those moving targets. Since the above scheme tracks at the granularity of whole moving targets, compared with the prior art, which needs to track and compute at the granularity of individual pixels, the above technical scheme of the present invention helps greatly reduce the computational complexity of tracking moving targets; an end-to-end sketch of one iteration of this flow follows.
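The sketch below strings the earlier helper sketches together for a single frame, using OpenCV for foreground extraction, morphology and connected regions, with centroids taken as the reference points; all helper names, thresholds and parameter values are assumptions carried over from the earlier sketches:

```python
import cv2

def track_frame(frame, subtractor, mask_set):
    """One iteration: foreground picture -> morphology -> connected regions
    -> distance matrix -> matching matrix -> matched (mask, target) pairs."""
    fg = subtractor.apply(frame)                         # foreground picture
    fg = cv2.medianBlur(fg, 3)                           # filtering
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    fg = cv2.erode(cv2.dilate(fg, kernel), kernel)       # dilation then erosion
    n, _, stats, centroids = cv2.connectedComponentsWithStats(fg)
    targets = [tuple(centroids[k]) for k in range(1, n)  # label 0 is background
               if stats[k, cv2.CC_STAT_AREA] > 20]       # discard tiny regions
    dist = build_distance_matrix(mask_set, targets)
    return build_match_matrix(dist)
```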
A relevant apparatus for implementing the above scheme is provided below.

Referring to Fig. 3, a motion target tracking device 300 provided by the present invention can comprise:

an acquiring unit 301, an obtaining unit 302, a processing unit 303, an extraction unit 304 and a tracking treatment unit 305.
Wherein, the acquiring unit 301 is configured to obtain an image from a video sequence.

The obtaining unit 302 is configured to obtain the foreground picture of the above image.

The processing unit 303 is configured to perform morphology processing on the above foreground picture.

The extraction unit 304 is configured to perform a connected region extraction operation on the morphology-processed foreground picture to obtain the moving target contained in the foreground picture.

The tracking treatment unit 305 is configured to determine the movement locus of the above moving target based on the Motion mask corresponding to the moving target.
Optionally, in some possible embodiments of the present invention, the above tracking treatment unit 305 is specifically configured to: calculate the Euclidean distance between the above moving target and each Motion mask in a Motion mask set; determine, based on the calculated Euclidean distances, the matching degree between the moving target and each Motion mask in the Motion mask set; and, if a first Motion mask matching the moving target is determined based on the matching degree, add the reference point coordinate of the moving target to the movement locus array in the first Motion mask and determine the movement locus of the moving target based on the reference point coordinates currently recorded in that array, where the first Motion mask is one of the Motion masks in the Motion mask set.
Optionally, in some possible embodiments of the present invention, the above tracking treatment unit 305 is further configured to: if it is determined based on the above matching degree that the moving target matches none of the Motion masks in the Motion mask set, generate a first Motion mask corresponding to the moving target and add it to the Motion mask set, where the reference point coordinate of the moving target is recorded in the movement locus array of this first Motion mask.
Optionally, in some possible embodiments of the present invention, in the aspect of calculating the Euclidean distance between the above moving target and each Motion mask in the Motion mask set, the above tracking treatment unit 305 is specifically configured to construct a distance matrix Dist from the Motion mask set and the moving target;

$$\mathrm{Dist} = \begin{bmatrix} D_1 \\ D_2 \\ \vdots \\ D_m \end{bmatrix};$$

where the number of rows m of the distance matrix equals the number of Motion masks currently contained in the Motion mask set, and element D_i in Dist represents the Euclidean distance between the above moving target and Motion mask i in the Motion mask set.
Optionally, in some possible embodiments of the present invention, the above D_i can be calculated by the following formula:

$$D_i = \begin{cases} \sqrt{(x_i + dx - x_j)^2 + (y_i + dy - y_j)^2}, & \text{if } \sqrt{(x_i + dx - x_j)^2 + (y_i + dy - y_j)^2} < T \\ T, & \text{otherwise} \end{cases}$$
where T is the first threshold;

(x_j, y_j) is the reference point coordinate of the above moving target;

(x_i, y_i) is the reference point coordinate of Motion mask i in the Motion mask set;

and (dx, dy) is the predictive displacement recorded in Motion mask i.
Optionally, in some possible embodiments of the present invention, in the aspect of determining, based on the calculated Euclidean distances between the above moving target and each Motion mask in the Motion mask set, the matching degree between the moving target and each Motion mask in the set, the above tracking treatment unit 305 is specifically configured to construct a matching matrix Match based on the above distance matrix;

$$\mathrm{Match} = \begin{bmatrix} M_1 \\ M_2 \\ \vdots \\ M_m \end{bmatrix}$$

where the number of rows m of the matching matrix equals the number of Motion masks currently contained in the Motion mask set, and element M_i in Match represents the matching degree between the above moving target and Motion mask i in the Motion mask set; this matching degree is determined based on the Euclidean distance between the moving target and Motion mask i.
Optionally, in some possible embodiments of the present invention, the above M_i is calculated by formula from the above distance matrix.
Optionally, in some possible embodiments of the present invention, if the first Motion mask matching the above moving target is determined based on the above matching degree, the above tracking treatment unit 305 is further configured to update the predictive displacement (dx, dy) contained in the first Motion mask as follows:

$$dx = x_{i1} - x_i, \qquad dy = y_{i1} - y_i,$$

where (x_{i1}, y_{i1}) is the reference point coordinate of the above moving target, and (x_i, y_i) is the reference point coordinate most recently added to the movement locus array of the first Motion mask before the reference point coordinate of the moving target is added.
Optionally, in some possible embodiments of the present invention, after the first Motion mask matching the above moving target is determined based on the above matching degree, the above tracking treatment unit 305 is further configured to add s1 to the confidence Threshold_In1, recorded in the first Motion mask, of the first Motion mask becoming a motion tracking template;

and, for the other Motion masks in the Motion mask set besides the first Motion mask, to decrease by s1 the confidence Threshold_In1 of becoming a motion tracking template recorded in those Motion masks, and to decrease by s2 the Motion-mask-disappearance confidence Threshold_In2 recorded in them, where s1 and s2 are positive integers.
Optionally, in some possible embodiments of the present invention, in the aspect of determining the movement locus of the above moving target based on the reference point coordinates currently recorded in the movement locus array in the above first Motion mask, the above tracking treatment unit 305 is specifically configured to:

calculate the direction gradients between adjacent reference point coordinates among the P1 earliest-recorded reference point coordinates in the movement locus array in the first Motion mask, to obtain P1-1 direction gradients, and calculate the angles between adjacent direction gradients among these P1-1 direction gradients, to obtain P1-2 angles, where P1 is a positive integer greater than 2;

calculate the direction gradients between adjacent reference point coordinates among the P2 latest-recorded reference point coordinates in the movement locus array in the first Motion mask, to obtain P2-1 direction gradients, and calculate the angles between adjacent direction gradients among these P2-1 direction gradients, to obtain P2-2 angles, where P2 is a positive integer greater than 2;

if the number of angles greater than the first angle threshold among the above P1-2 angles is greater than P3, the number of angles greater than the second angle threshold among the above P2-2 angles is greater than P4, and the area of the region where the movement locus corresponding to the reference point coordinates recorded in the movement locus array in the first Motion mask is located is greater than the first area threshold, draw the movement locus of the above moving target using the reference point coordinates currently recorded in that array.
Optionally, in some possible embodiments of the present invention, the above reference point coordinate is a center point coordinate or a centroid point coordinate.

It can be understood that the functions of the functional modules of the motion target tracking device 300 of this embodiment can be specifically implemented according to the methods in the above method embodiments; for the specific implementation process, reference can be made to the related descriptions of the above method embodiments, which are not repeated here.

It can be seen that, in this embodiment, after morphology processing is performed on the foreground picture of an image, a connected region extraction operation is performed on the morphology-processed foreground picture to obtain the moving target contained in the foreground picture, and the movement locus of the moving target is determined based on the Motion mask corresponding to it. Since the above scheme tracks at the granularity of whole moving targets, compared with the prior art, which needs to track and compute at the granularity of individual pixels, the above technical scheme of the present invention helps greatly reduce the computational complexity of tracking moving targets.
Referring to Fig. 4, Fig. 4 is a structural block diagram of a motion target tracking device 400 provided by another embodiment of the present invention.

The motion target tracking device 400 can comprise: at least one processor 401, a memory 405 and at least one communication bus 402. The communication bus 402 is used to realize connection and communication between these components.

Optionally, the motion target tracking device 400 can also comprise: at least one network interface 404, a user interface 403, etc. Optionally, the user interface 403 comprises a display (such as a touch-screen, a liquid crystal display, a holographic display or a projector), a pointing device (such as a mouse, a trackball, a touch-sensitive pad or a touch-screen), a camera and/or a sound pickup device, etc.
Wherein, the memory 405 can comprise a read-only memory and a random access memory, and provides instructions and data to the processor 401. A part of the memory 405 can also comprise a nonvolatile random access memory.

In some possible embodiments, the memory 405 stores the following elements: executable modules or data structures, or a subset or a superset thereof:

an operating system and application programs.

Wherein, the application programs can comprise the acquiring unit 301, the obtaining unit 302, the processing unit 303, the extraction unit 304, the tracking treatment unit 305, etc.
In this embodiment of the present invention, by executing the code or instructions in the memory 405, the processor 401 is configured to: obtain an image from a video sequence; obtain the foreground picture of the image; perform morphology processing on the foreground picture; perform a connected region extraction operation on the morphology-processed foreground picture to obtain the moving target contained in the foreground picture; and determine the movement locus of the moving target based on the Motion mask corresponding to the moving target.

The foreground picture of an image may contain one or more moving targets, or may contain no moving target at all. This embodiment of the present invention mainly takes the scene in which the foreground picture contains one or more moving targets as an example.
Optionally, in some possible embodiments of the present invention, the processor 401 obtaining the foreground picture of the above image can, for example, comprise: the processor 401 obtains the Background of the image, and obtains the foreground picture of the image based on the image and the Background. The Background contains no moving target, and Backgrounds can be divided into grayscale Backgrounds and color Backgrounds. The foreground picture is an image in which the moving targets are marked; for example, non-moving-target areas are all marked black and moving-target areas are all marked white. The background modeling method can, for example, adopt a mixture-of-Gaussians background modeling algorithm: images are read from the video sequence, and the Background is established using the images and the mixture-of-Gaussians background modeling algorithm. The foreground picture is then obtained by processing the image and the Background with a background difference algorithm.
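As an illustration, the following is a minimal sketch of obtaining foreground pictures with mixture-of-Gaussians background modeling, using OpenCV's MOG2 subtractor for the example (the video path is a placeholder):

```python
import cv2

cap = cv2.VideoCapture("video.mp4")                # placeholder input path
subtractor = cv2.createBackgroundSubtractorMOG2()  # maintains the Background internally

while True:
    ok, frame = cap.read()
    if not ok:
        break
    foreground = subtractor.apply(frame)           # white: moving targets; black: background
cap.release()
```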
Optionally, in some possible embodiments of the present invention, the processor 401 performing morphology processing on the above foreground picture can comprise: the processor 401 performs at least one of the following morphology processes on the foreground picture: filtering, dilation-erosion, opening operation and closing operation.

For example, if the foreground picture contains relatively large noise, filtering and dilation-erosion can be applied to it. Median filtering can first be applied to the foreground picture to remove noise, and the filtered foreground picture can then be dilated and subsequently eroded, so as to remove holes in the moving targets and merge their fragments. Suppose the median filter template is an n*n template; the template can be used to scan the foreground picture. The preferred value of n is odd; n can, for example, be 3, 5, 7, 9, 11 or 13.
When the pixel values of the foreground picture at the positions covered by the median filter template are

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix},$$

these n*n pixel values can first be sorted, and the median then replaces the pixel value at the corresponding position in the foreground picture. For example, with a 3*3 median filter template applied to a binary foreground picture, a point is judged to be a foreground point only if at least five of its nine neighborhood pixels are foreground points; in this way some small isolated noises can be removed.
Optionally, dilation and erosion can adopt a 3*3 template

$$\mathrm{Mask} = \begin{bmatrix} 255 & 255 & 255 \\ 255 & 255 & 255 \\ 255 & 255 & 255 \end{bmatrix},$$

where the origin is the center point of the template. Dilation performs an AND operation between the template and the corresponding pixel values of the foreground picture: if the result is 0, the position of the binary image corresponding to the origin is assigned 0, otherwise 255. Erosion performs an AND operation between the template and the corresponding pixel values of the binary image: if the result is 255, the position corresponding to the origin is assigned 255, otherwise 0. Of course, in some application scenarios, dilation and erosion can also be performed in other ways.
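One such other way is to use a standard image-processing library; the sketch below applies the same processing order (median filtering, then dilation, then erosion) with OpenCV operators rather than the AND-template formulation above:

```python
import cv2
import numpy as np

def clean_foreground(fg, n=3):
    """Median-filter a binary foreground picture, then dilate and erode it."""
    fg = cv2.medianBlur(fg, n)                 # removes small isolated noise
    kernel = np.ones((3, 3), dtype=np.uint8)   # 3x3 template, origin at centre
    fg = cv2.dilate(fg, kernel)                # fills holes, merges fragments
    fg = cv2.erode(fg, kernel)                 # restores approximate object size
    return fg
```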
Optionally, in some possible embodiments of the present invention, the processor 401 determining the movement locus of the above moving target based on the Motion mask corresponding to the moving target can comprise: the processor 401 calculates the Euclidean distance between the moving target and each Motion mask in a Motion mask set; determines, based on the calculated Euclidean distances, the matching degree between the moving target and each Motion mask in the set; and, if a first Motion mask matching the moving target is determined based on the matching degree, adds the reference point coordinate of the moving target to the movement locus array in the first Motion mask and determines the movement locus of the moving target based on the reference point coordinates currently recorded in that array, where the first Motion mask is one of the Motion masks in the Motion mask set.

In addition, if it is determined based on the above matching degree that the moving target matches none of the Motion masks in the Motion mask set, the processor 401 can also generate a first Motion mask corresponding to the moving target and add it to the Motion mask set, where the reference point coordinate of the moving target is recorded in the movement locus array of this first Motion mask.
Optionally, in some possible embodiments of the present invention, the processor 401 calculating the Euclidean distance between the above moving target and each Motion mask in the Motion mask set can comprise:

constructing a distance matrix Dist from the Motion mask set and the moving target;

$$\mathrm{Dist} = \begin{bmatrix} D_1 \\ D_2 \\ \vdots \\ D_m \end{bmatrix};$$

where the number of rows m of the distance matrix equals the number of Motion masks currently contained in the Motion mask set, and element D_i in Dist represents the Euclidean distance between the above moving target and Motion mask i in the Motion mask set.
The above D_i can, for example, be calculated by the following formula:

$$D_i = \begin{cases} \sqrt{(x_i + dx - x_j)^2 + (y_i + dy - y_j)^2}, & \text{if } \sqrt{(x_i + dx - x_j)^2 + (y_i + dy - y_j)^2} < T \\ T, & \text{otherwise} \end{cases}$$

where (x_i, y_i) is the reference point coordinate of Motion mask i in the Motion mask set, (x_j, y_j) is the reference point coordinate of the above moving target, (dx, dy) is the predictive displacement recorded in Motion mask i, and T is the first threshold. The value range of the first threshold T can, for example, be [20, 150]; specific examples are 20, 25, 30, 35, 40, 50, 120, 140, 150, etc.
Optionally, in some possible embodiments of the present invention, the processor 401 determining, based on the calculated Euclidean distances between the above moving target and each Motion mask in the Motion mask set, the matching degree between the moving target and each Motion mask in the set can comprise:

constructing a matching matrix Match based on the above distance matrix;

$$\mathrm{Match} = \begin{bmatrix} M_1 \\ M_2 \\ \vdots \\ M_m \end{bmatrix}$$

where the number of rows m of the matching matrix Match equals the number of Motion masks currently contained in the Motion mask set, and element M_i in Match represents the matching degree between the above moving target and Motion mask i in the Motion mask set; this matching degree is determined based on the Euclidean distance between the moving target and Motion mask i.
Optionally, in some possible embodiments of the present invention, the above M_i is calculated by formula, where element M_k represents the matching degree between the above moving target and Motion mask k in the Motion mask set.
Optionally, in other possible embodiments of the present invention, the above M_i can also be calculated by another formula, where a1 can be a positive number; for example, a1 can equal 0.2, 1, 2, 3, 4.5, 8.3 or other values.

When M_i is greater than the other elements in the matching matrix Match (for example, when M_i equals a1), the above moving target is considered to match Motion mask i in the Motion mask set.
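Since the formula images are not reproduced above, the following single-target sketch is an assumption consistent with the stated behaviour: a1 is credited to the nearest mask whose clipped distance is below the first threshold T, and the target matches mask i when M_i exceeds every other entry (e.g., M_i = a1):

```python
def match_single_target(dist_vector, t=40.0, a1=1.0):
    """Return the index of the matching Motion mask, or None if no match."""
    match = [0.0] * len(dist_vector)
    i_min = min(range(len(dist_vector)), key=lambda i: dist_vector[i])
    if dist_vector[i_min] < t:      # entries clipped to T cannot match
        match[i_min] += a1
    best = max(range(len(match)), key=lambda i: match[i])
    return best if match[best] == a1 else None
```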
Optionally, in some possible embodiments of the present invention, after the first Motion mask matching the above moving target is determined based on the above matching degree (for example, when the matching degree between the moving target and the first Motion mask is greater than the matching degrees between the moving target and the other Motion masks in the Motion mask set), the processor 401 can also update the predictive displacement (dx, dy) contained in the first Motion mask as follows:

$$dx = x_j - x_i, \qquad dy = y_j - y_i,$$

where (x_j, y_j) is the reference point coordinate of the above moving target, and (x_i, y_i) is the reference point coordinate most recently added to the movement locus array of the first Motion mask before the reference point coordinate of the moving target is added.
Optionally, in some possible embodiments of the present invention, after the first Motion mask matching the above moving target is determined based on the above matching degree, the processor 401 can also add s1 to the confidence Threshold_In1, recorded in the first Motion mask, of the first Motion mask becoming a motion tracking template, where s1 is a positive integer; for example, s1 can equal 1, 2, 3, 4, 6, 8, 10, 20, 51 or another positive integer.

For the other Motion masks in the Motion mask set besides the first Motion mask, the confidence Threshold_In1 of becoming a motion tracking template recorded in those Motion masks is decreased by s1, and the Motion-mask-disappearance confidence Threshold_In2 recorded in them is decreased by s2, where s2 is a positive integer; for example, s2 can equal 1, 2, 3, 4, 6, 8, 10, 20, 51 or another positive integer.

Further, when the Motion-mask-disappearance confidence Threshold_In2 recorded in a certain Motion mask in the above Motion mask set is less than or equal to S22 (S22 is, for example, less than or equal to 0), that Motion mask can be removed from the Motion mask set.

Further, the processor 401 can also update the area of the moving target recorded in the first Motion mask with the area of the above moving target, and update the position of the moving target recorded in the first Motion mask with the position of the above moving target.
Optionally, in some possible embodiments of the present invention, the processor 401 determining the movement locus of the above moving target based on the reference point coordinates currently recorded in the movement locus array in the above first Motion mask can comprise:

calculating the direction gradients between adjacent reference point coordinates among the P1 earliest-recorded reference point coordinates in the movement locus array in the first Motion mask, to obtain P1-1 direction gradients, and calculating the angles between adjacent direction gradients among these P1-1 direction gradients, to obtain P1-2 angles, where P1 is a positive integer greater than 2;

calculating the direction gradients between adjacent reference point coordinates among the P2 latest-recorded reference point coordinates in the movement locus array in the first Motion mask, to obtain P2-1 direction gradients, and calculating the angles between adjacent direction gradients among these P2-1 direction gradients, to obtain P2-2 angles, where P2 is a positive integer greater than 2;

if the number of angles greater than the first angle threshold among the above P1-2 angles is greater than P3, the number of angles greater than the second angle threshold among the above P2-2 angles is greater than P4, and the area of the region where the movement locus corresponding to the reference point coordinates recorded in the movement locus array in the first Motion mask is located is greater than the first area threshold, drawing the movement locus of the above moving target using the reference point coordinates currently recorded in that array.

Here, two adjacent reference point coordinates in the movement locus array in the first Motion mask are two coordinates adjacent in the time at which they were added to that array, and adjacent direction gradients are the two direction gradients calculated from three reference point coordinates adjacent in the time at which they were added to that array. For example, if reference point coordinates f1, f2 and f3 are three adjacent reference point coordinates in the movement locus array in the first Motion mask, the direction gradient between f1 and f2 is f1_2, and the direction gradient between f2 and f3 is f2_3, then f1_2 and f2_3 are adjacent direction gradients. For ease of recording, the later a reference point coordinate is added to the movement locus array in a Motion mask (the first Motion mask), the larger (or, under the opposite convention, the smaller) its index in that array.
Here, the region where the movement locus corresponding to the reference point coordinates recorded in the movement locus array in the first Motion mask is located can be the maximum circumscribed rectangular region of that movement locus.

The value range of the first angle threshold can, for example, be [0°, 180°], with a preferred range of, for example, [30°, 90°]; it can specifically equal 30°, 38°, 45°, 60°, 70°, 90° or another angle.

The value range of the second angle threshold can, for example, be [0°, 180°], with a preferred range of, for example, [30°, 90°]; it can specifically equal 30°, 38°, 48°, 65°, 77°, 90° or another angle.

The value range of the first area threshold can, for example, be [10, 50]; specific examples are 10, 15, 21, 25, 30, 35, 40, 44, 48, 50 or other values.

Suppose X_min denotes the minimum X coordinate value among the reference point coordinates recorded in the movement locus array, X_max the maximum X coordinate value, Y_min the minimum Y coordinate value, and Y_max the maximum Y coordinate value. The Euclidean distance between (X_min, Y_min) and (X_max, Y_max) can then be taken as the area of the region where the movement locus corresponding to the reference point coordinates recorded in the movement locus array in the above first Motion mask is located.
Here, the angle between two direction gradients is obtained from the cosine of the angle between them, which equals their inner product divided by the product of their moduli; a specific calculation formula can be as follows:

$$\mathrm{Angle}_i = \arccos\!\left(\frac{dx_{i+1} \cdot dx_i + dy_{i+1} \cdot dy_i}{\sqrt{dx_{i+1}^2 + dy_{i+1}^2} \cdot \sqrt{dx_i^2 + dy_i^2}}\right) \cdot \frac{180}{\pi};$$

where (dx_i, dy_i) is the current direction gradient and (dx_{i+1}, dy_{i+1}) is the next adjacent direction gradient.
Here, P1 and P2 can, for example, equal 10, 12, 15 or 8; P3 can equal 6, 5 or 4; and P4 can equal 3, 4, 5 or 6. If the Motion mask has not been locked, the start position of the track points is increased by 2, so that the start position of the track points used in the next direction-gradient calculation skips the first two points of the current calculation position, that is, StartPos = StartPos + 2.

Optionally, in some possible embodiments of the present invention, the above reference point coordinate is a center point coordinate, a centroid point coordinate or another point coordinate. For example, the reference point coordinate of a moving target can be the center point coordinate of the moving target, its centroid point coordinate, or the coordinate of another point on the moving target.
It can be appreciated that, if the foreground picture contains a plurality of moving targets, each moving target can be tracked in the manner described above.

It can be understood that the functions of the functional modules of the motion target tracking device 400 of this embodiment can be specifically implemented according to the methods in the above method embodiments; for the specific implementation process, reference can be made to the related descriptions of the above method embodiments, which are not repeated here.

The motion target tracking device 400 can be, for example, a device such as a mobile phone, a tablet computer, a PC, a notebook computer, a video camera or a monitor.

It can be seen that, in this embodiment, the motion target tracking device 400 performs morphology processing on the foreground picture of an image, performs a connected region extraction operation on the morphology-processed foreground picture to obtain the moving target contained in the foreground picture, and determines the movement locus of the moving target based on the Motion mask corresponding to it. Since the above scheme tracks at the granularity of whole moving targets, compared with the prior art, which needs to track and compute at the granularity of individual pixels, the above technical scheme of the present invention helps greatly reduce the computational complexity of tracking moving targets.

An embodiment of the present invention also provides a computer storage medium, where the computer storage medium can store a program which, when executed, performs some or all of the steps of the motion target tracking methods recorded in the above method embodiments.
It should be noted that, for the sake of simple description, each of the foregoing method embodiments is expressed as a series of action combinations; however, those skilled in the art should know that the present invention is not limited by the described order of actions, because according to the present invention some steps can be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.

In the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in a certain embodiment, reference can be made to the related descriptions of other embodiments.

In the several embodiments provided in this application, it should be understood that the disclosed device can be realized in other ways. For example, the device embodiments described above are only schematic: the division of the above units is only a division by logical function, and there can be other division modes in actual implementation; for example, a plurality of units or components can be combined or integrated into another system, or some features can be ignored or not carried out. In addition, the mutual coupling or direct coupling or communication connection shown or discussed can be indirect coupling or communication connection through some interfaces, devices or units, and can be electrical or in other forms.

The units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they can be located in one place or distributed over a plurality of network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the scheme of this embodiment.

In addition, the functional units in the embodiments of the present invention can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit. The above integrated unit can be realized in the form of hardware or in the form of a software functional unit.

If the above integrated unit is realized in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on such understanding, the essence of the technical scheme of the present invention, or the part contributing to the prior art, or all or part of the technical scheme, can be embodied in the form of a software product. The computer software product is stored in a storage medium and comprises several instructions used to make a computer device (which can be a personal computer, a server, a network device, etc.) perform all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium comprises various media that can store program code, such as a USB flash disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a portable hard drive, a magnetic disk or an optical disc.

The above embodiments are only used to illustrate the technical scheme of the present invention and are not intended to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical schemes recorded in the foregoing embodiments or make equivalent replacements of some of the technical features therein, and these modifications or replacements do not make the essence of the corresponding technical schemes depart from the spirit and scope of the technical schemes of the embodiments of the present invention.
Accompanying drawing explanation
In order to more clearly illustrate the embodiments of the present invention or the technical schemes in the prior art, the accompanying drawings needed in the embodiments or the description of the prior art are briefly described below. Apparently, the accompanying drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can also obtain other drawings from them without creative work.
Fig. 1 is a schematic flowchart of a motion target tracking method provided by an embodiment of the present invention;

Fig. 2 is a schematic flowchart of another motion target tracking method provided by an embodiment of the present invention;

Fig. 3 is a schematic diagram of a motion target tracking device provided by an embodiment of the present invention;

Fig. 4 is a schematic diagram of a motion target tracking device provided by an embodiment of the present invention.
Embodiment
The embodiments of the present invention provide a motion target tracking method and device, so as to reduce the computational complexity of tracking a moving target.

In order to enable those skilled in the art to better understand the scheme of the present invention, the technical schemes in the embodiments of the present invention are described clearly and completely below in combination with the accompanying drawings in the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.

Detailed descriptions are given below.

Claims (23)

1. A motion target tracking method, characterized by comprising:

obtaining an image from a video sequence;

obtaining the foreground picture of said image;

performing morphology processing on said foreground picture;

performing a connected region extraction operation on the morphology-processed foreground picture to obtain the moving target contained in said foreground picture; and determining the movement locus of said moving target based on the Motion mask corresponding to said moving target.
2. The method according to claim 1, characterized in that performing morphology processing on said foreground picture comprises: performing at least one of the following morphology processes on said foreground picture: filtering, dilation-erosion, opening operation and closing operation.
3. The method according to claim 1 or 2, characterized in that determining the movement locus of said moving target based on the Motion mask corresponding to said moving target comprises:

calculating the Euclidean distance between said moving target and each Motion mask in a Motion mask set;

determining, based on the calculated Euclidean distances between said moving target and each Motion mask in the Motion mask set, the matching degree between said moving target and each Motion mask in said Motion mask set; and

if a first Motion mask matching said moving target is determined based on said matching degree, adding the reference point coordinate of said moving target to the movement locus array in said first Motion mask, and determining the movement locus of said moving target based on the reference point coordinates currently recorded in the movement locus array in said first Motion mask, wherein said first Motion mask is one of the Motion masks in said Motion mask set.
4. The method according to claim 3, characterized in that the method further comprises:

if it is determined based on said matching degree that said moving target matches none of the Motion masks in said Motion mask set, generating a first Motion mask corresponding to said moving target and adding said first Motion mask to said Motion mask set, wherein the reference point coordinate of said moving target is recorded in the movement locus array of said first Motion mask.
5. The method according to claim 3 or 4, characterized in that calculating the Euclidean distance between said moving target and each Motion mask in the Motion mask set comprises:

constructing a distance matrix Dist from the Motion mask set and said moving target;

$$\mathrm{Dist} = \begin{bmatrix} D_1 \\ D_2 \\ \vdots \\ D_m \end{bmatrix};$$

wherein the number of rows m of said distance matrix equals the number of Motion masks currently contained in said Motion mask set, and element D_i in said Dist represents the Euclidean distance between said moving target and Motion mask i in said Motion mask set.
6. The method according to claim 5, characterized in that said D_i is calculated by the following formula:

$$D_i = \begin{cases} \sqrt{(x_i + dx - x_j)^2 + (y_i + dy - y_j)^2}, & \text{if } \sqrt{(x_i + dx - x_j)^2 + (y_i + dy - y_j)^2} < T \\ T, & \text{otherwise} \end{cases}$$

wherein said T is the first threshold;

said (x_j, y_j) is the reference point coordinate of said moving target;

said (x_i, y_i) is the reference point coordinate of Motion mask i in said Motion mask set;

and said (dx, dy) is the predictive displacement recorded in said Motion mask i.
7. The method according to claim 5 or 6, characterized in that determining, based on the calculated Euclidean distances between said moving target and each Motion mask in the Motion mask set, the matching degree between said moving target and each Motion mask in the set comprises:

constructing a matching matrix Match based on said distance matrix;

$$\mathrm{Match} = \begin{bmatrix} M_1 \\ M_2 \\ \vdots \\ M_m \end{bmatrix}$$

wherein the number of rows m of said matching matrix equals the number of Motion masks currently contained in said Motion mask set, and element M_i in said Match represents the matching degree between said moving target and Motion mask i in said Motion mask set, said matching degree being determined based on the Euclidean distance between said moving target and Motion mask i in said Motion mask set.
8. The method according to claim 7, characterized in that said M_i is calculated by the following formula:
9. The method according to any one of claims 3 to 8, characterized in that, after the first Motion mask matching said moving target is determined based on said matching degree, the method further comprises: updating the predictive displacement (dx, dy) contained in said first Motion mask as follows:

$$dx = x_{i1} - x_i, \qquad dy = y_{i1} - y_i,$$

wherein (x_{i1}, y_{i1}) is the reference point coordinate of said moving target, and (x_i, y_i) is the reference point coordinate most recently added to the movement locus array of said first Motion mask before the reference point coordinate of said moving target is added.
10. The method according to any one of claims 3 to 9, characterized in that, after the first Motion mask matching said moving target is determined based on said matching degree,

the method further comprises:

adding s1 to the confidence Threshold_In1, recorded in said first Motion mask, of said first Motion mask becoming a motion tracking template;

and, for the other Motion masks in said Motion mask set besides said first Motion mask, decreasing by s1 the confidence Threshold_In1 of becoming a motion tracking template recorded in those Motion masks, and decreasing by s2 the Motion-mask-disappearance confidence Threshold_In2 recorded in those Motion masks, wherein said s1 is a positive integer and said s2 is a positive integer.
11. The method according to any one of claims 3 to 10, characterized in that determining the movement locus of said moving target based on the reference point coordinates currently recorded in the movement locus array in said first Motion mask comprises:

calculating the direction gradients between adjacent reference point coordinates among the P1 earliest-recorded reference point coordinates in the movement locus array in said first Motion mask, to obtain P1-1 direction gradients, and calculating the angles between adjacent direction gradients among said P1-1 direction gradients, to obtain P1-2 angles, wherein said P1 is a positive integer greater than 2;

calculating the direction gradients between adjacent reference point coordinates among the P2 latest-recorded reference point coordinates in the movement locus array in said first Motion mask, to obtain P2-1 direction gradients, and calculating the angles between adjacent direction gradients among said P2-1 direction gradients, to obtain P2-2 angles, wherein said P2 is a positive integer greater than 2;

if the number of angles greater than the first angle threshold among said P1-2 angles is greater than P3, the number of angles greater than the second angle threshold among said P2-2 angles is greater than P4, and the area of the region where the movement locus corresponding to the reference point coordinates recorded in the movement locus array in said first Motion mask is located is greater than the first area threshold, drawing the movement locus of said moving target using the reference point coordinates currently recorded in the movement locus array in said first Motion mask.
12. The method according to any one of claims 3 to 11, characterized in that said reference point coordinate is a center point coordinate or a centroid point coordinate.
13. A motion target tracking device, comprising:
an acquiring unit, configured to obtain an image from a video sequence;
an obtaining unit, configured to obtain a foreground picture of the image;
a processing unit, configured to perform morphological processing on the foreground picture;
an extraction unit, configured to perform a connected region extraction operation on the morphologically processed foreground picture, to obtain a moving target comprised in the foreground picture;
a tracking processing unit, configured to determine the movement locus of the moving target based on the Motion mask corresponding to the moving target.
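Taken together, the first four units describe a per-frame pipeline. A minimal sketch with OpenCV in Python; the MOG2 background subtractor, the 5x5 elliptical kernel, and the 100-pixel minimum-area filter are the editor's choices, not the patent's:

    import cv2

    cap = cv2.VideoCapture("video.mp4")        # acquiring unit: images from a video sequence
    bg = cv2.createBackgroundSubtractorMOG2()  # obtaining unit: one way to get a foreground picture
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

    ok, frame = cap.read()
    while ok:
        fg = bg.apply(frame)
        _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)        # processing unit: morphology
        fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)
        # extraction unit: connected region extraction yields the moving targets
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(fg)
        targets = [centroids[i] for i in range(1, n)
                   if stats[i, cv2.CC_STAT_AREA] > 100]          # assumed noise filter
        ok, frame = cap.read()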
14. The device according to claim 13, wherein the tracking processing unit is specifically configured to: calculate the Euclidean distance between the moving target and each Motion mask in a Motion mask set; determine, based on the calculated Euclidean distances, the matching degree between the moving target and each Motion mask in the Motion mask set; and, if a first Motion mask matching the moving target is determined based on the matching degrees, add the reference point coordinate of the moving target to the movement locus array in the first Motion mask and determine the movement locus of the moving target based on the reference point coordinates currently recorded in that movement locus array, wherein the first Motion mask is one of the Motion masks in the Motion mask set.
15. The device according to claim 14, wherein the tracking processing unit is further configured to: if it is determined based on the matching degrees that the moving target matches none of the Motion masks in the Motion mask set, generate a first Motion mask corresponding to the moving target and add it to the Motion mask set, wherein the reference point coordinate of the moving target is recorded in the movement locus array of the first Motion mask.
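One way to picture the record these claims manipulate: a Motion mask bundles the movement locus array, the predictive displacement, and the two confidence counters, and a target that matches no existing mask spawns a new one. A sketch under the same assumed dict layout used in the earlier fragments:

    def new_motion_mask(point):
        """A fresh Motion mask whose movement locus array records the target's reference point."""
        return {"trajectory": [point],
                "dx": 0.0, "dy": 0.0,   # predictive displacement, initially zero
                "Threshold_In1": 0,     # confidence of becoming a motion tracking template
                "Threshold_In2": 0}     # confidence that the mask disappears

    def associate(mask_set, target_point, find_match):
        """Extend the matching mask's trajectory, or add a new mask to the set."""
        mask = find_match(mask_set, target_point)  # e.g. argmax of the matching degrees
        if mask is None:
            mask_set.append(new_motion_mask(target_point))
        else:
            mask["trajectory"].append(target_point)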
16. The device according to claim 14 or 15, wherein, in calculating the Euclidean distance between the moving target and each Motion mask in the Motion mask set, the tracking processing unit is specifically configured to construct a distance matrix Dist from the Motion mask set and the moving target;
Dist = [D_1, D_2, ..., D_m]^T;
wherein the number of rows m of the distance matrix equals the number of Motion masks currently comprised in the Motion mask set, and element D_i in Dist represents the Euclidean distance between the moving target and Motion mask i in the Motion mask set.
17. The device according to claim 16, wherein D_i is calculated by the following formula:
D_i = sqrt((x_i + dx - x_j)^2 + (y_i + dy - y_j)^2), if sqrt((x_i + dx - x_j)^2 + (y_i + dy - y_j)^2) < T; D_i = T, otherwise;
wherein T is a first threshold;
(x_j, y_j) is the reference point coordinate of the moving target;
(x_i, y_i) is the reference point coordinate of Motion mask i in the Motion mask set; and
(dx, dy) is the predictive displacement recorded in Motion mask i.
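A sketch of this thresholded distance and the resulting m-row matrix, under the same assumed dict layout; math.hypot computes the Euclidean norm:

    import math

    def element_distance(mask, target_point, T):
        """D_i: distance from the mask's predicted position to the target, clamped at T."""
        x_i, y_i = mask["trajectory"][-1]   # reference point of Motion mask i
        x_j, y_j = target_point             # reference point of the moving target
        d = math.hypot(x_i + mask["dx"] - x_j, y_i + mask["dy"] - y_j)
        return d if d < T else T

    def distance_matrix(mask_set, target_point, T):
        """Dist: m rows, one D_i per Motion mask currently in the set."""
        return [element_distance(mask, target_point, T) for mask in mask_set]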
18. The device according to claim 16 or 17, wherein, in determining, based on the calculated Euclidean distances, the matching degree between the moving target and each Motion mask in the Motion mask set, the tracking processing unit is specifically configured to construct a matching matrix Match based on the distance matrix;
Match = [M_1, M_2, ..., M_m]^T
wherein the number of rows m of the matching matrix equals the number of Motion masks currently comprised in the Motion mask set, and element M_i in Match represents the matching degree between the moving target and Motion mask i in the Motion mask set, the matching degree being determined based on the Euclidean distance between the moving target and Motion mask i.
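The extracted text does not preserve the formula for M_i (the body of claim 19 below is lost), so the score in this sketch is a placeholder supplied by the editor; only the shape of the computation, one matching degree per Motion mask derived from its D_i, follows the claim:

    def match_matrix(dist, T):
        """Match: one M_i per row of Dist. The 1 - D_i/T score is an assumed
        placeholder, not the patent's formula; it maps distance T to degree 0."""
        return [1.0 - d / T for d in dist]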
19. The device according to claim 18, wherein M_i is calculated by the following formula:
20. The device according to any one of claims 14 to 19, wherein, if the first Motion mask matching the moving target is determined based on the matching degrees, the tracking processing unit is further configured to update the predictive displacement (dx, dy) comprised in the first Motion mask as follows:
dx = x_{i1} - x_i; dy = y_{i1} - y_i, wherein (x_{i1}, y_{i1}) is the reference point coordinate of the moving target, and (x_i, y_i) is the reference point coordinate most recently added to the movement locus array of the first Motion mask before the reference point coordinate of the moving target is added.
21. The device according to any one of claims 14 to 20, wherein, after the first Motion mask matching the moving target is determined based on the matching degrees, the tracking processing unit is further configured to: add s1 to the degree of confidence Threshold_In1, recorded in the first Motion mask, that the first Motion mask becomes a motion tracking template; and
subtract s1 from the degree of confidence Threshold_In1 of becoming a motion tracking template recorded in each Motion mask in the Motion mask set other than the first Motion mask, and subtract s2 from the degree of confidence Threshold_In2 that each such Motion mask disappears, wherein s1 and s2 are positive integers.
22. The device according to any one of claims 14 to 21, wherein, in determining the movement locus of the moving target based on the reference point coordinates currently recorded in the movement locus array of the first Motion mask, the tracking processing unit is specifically configured to:
calculate the direction gradients between adjacent reference point coordinates among the P1 reference point coordinates recorded earliest in the movement locus array of the first Motion mask, to obtain P1-1 direction gradients, and calculate the angles between adjacent direction gradients among the P1-1 direction gradients, to obtain P1-2 angles, wherein P1 is a positive integer greater than 2;
calculate the direction gradients between adjacent reference point coordinates among the P2 reference point coordinates recorded latest in the movement locus array of the first Motion mask, to obtain P2-1 direction gradients, and calculate the angles between adjacent direction gradients among the P2-1 direction gradients, to obtain P2-2 angles, wherein P2 is a positive integer greater than 2;
if the number of angles greater than a first angle threshold among the P1-2 angles is greater than P3, the number of angles greater than a second angle threshold among the P2-2 angles is greater than P4, and the area of the region covered by the movement locus corresponding to the reference point coordinates recorded in the movement locus array of the first Motion mask is greater than a first area threshold, draw the movement locus of the moving target from the reference point coordinates currently recorded in the movement locus array of the first Motion mask.
23. The device according to any one of claims 14 to 22, wherein the reference point coordinate is a center point coordinate or a centroid point coordinate.
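For either reading of the reference point, the coordinates fall out of the connected-region statistics directly. A sketch matching the OpenCV pipeline above; the stat layout is OpenCV's, the helper itself is the editor's:

    def center_point(stat):
        """Center point of a connected region's bounding box, where
        stat = [left, top, width, height, area] as returned by
        cv2.connectedComponentsWithStats."""
        x, y, w, h = stat[:4]
        return (x + w / 2.0, y + h / 2.0)

    # The centroid alternative needs no extra code: the centroids array returned
    # by cv2.connectedComponentsWithStats is the mean of each region's pixel
    # coordinates.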
CN201410373576.2A 2014-07-31 2014-07-31 Motion target tracking method and device Active CN104156982B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410373576.2A CN104156982B (en) 2014-07-31 2014-07-31 Motion target tracking method and device

Publications (2)

Publication Number Publication Date
CN104156982A true CN104156982A (en) 2014-11-19
CN104156982B CN104156982B (en) 2017-06-13

Family

ID=51882471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410373576.2A Active CN104156982B (en) 2014-07-31 2014-07-31 Motion target tracking method and device

Country Status (1)

Country Link
CN (1) CN104156982B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105025198A (en) * 2015-07-22 2015-11-04 东方网力科技股份有限公司 Space-time-factor-based grouping method for video moving objects
CN108986151A (en) * 2017-05-31 2018-12-11 华为技术有限公司 A kind of multiple target tracking processing method and equipment
CN108986151B (en) * 2017-05-31 2021-12-03 华为技术有限公司 Multi-target tracking processing method and equipment
CN110636248A (en) * 2018-06-22 2019-12-31 华为技术有限公司 Target tracking method and device
CN110636248B (en) * 2018-06-22 2021-08-27 华为技术有限公司 Target tracking method and device
CN110245611A (en) * 2019-06-14 2019-09-17 腾讯科技(深圳)有限公司 Image-recognizing method, device, computer equipment and storage medium
CN110245611B (en) * 2019-06-14 2021-06-15 腾讯科技(深圳)有限公司 Image recognition method and device, computer equipment and storage medium
WO2021170030A1 (en) * 2020-02-28 2021-09-02 华为技术有限公司 Method, device, and system for target tracking
CN113837143A (en) * 2021-10-21 2021-12-24 广州微林软件有限公司 Action recognition method

Also Published As

Publication number Publication date
CN104156982B (en) 2017-06-13

Similar Documents

Publication Publication Date Title
Li et al. LasHeR: A large-scale high-diversity benchmark for RGBT tracking
Fernandez-Sanjurjo et al. Real-time visual detection and tracking system for traffic monitoring
Bosquet et al. STDnet: Exploiting high resolution feature maps for small object detection
Jia et al. Segment, magnify and reiterate: Detecting camouflaged objects the hard way
CN104156982A (en) Moving object tracking method and device
Kalantar et al. Multiple moving object detection from UAV videos using trajectories of matched regional adjacency graphs
Li et al. Distortion-Adaptive Salient Object Detection in 360° Omnidirectional Images
Zhang et al. Visual tracking using Siamese convolutional neural network with region proposal and domain specific updating
Jiao et al. Real-time lane detection and tracking for autonomous vehicle applications
Yang et al. Intelligent video analysis: A Pedestrian trajectory extraction method for the whole indoor space without blind areas
Viguier et al. Automatic video content summarization using geospatial mosaics of aerial imagery
Chandler et al. Mitigation of effects of occlusion on object recognition with deep neural networks through low-level image completion
Alletto et al. Self-supervised optical flow estimation by projective bootstrap
Haggui et al. Centroid human tracking via oriented detection in overhead fisheye sequences
Yang et al. TGAN: A simple model update strategy for visual tracking via template-guidance attention network
CN117152206A (en) Multi-target long-term tracking method for unmanned aerial vehicle
Annunziata et al. Destnet: Densely fused spatial transformer networks
Yan et al. An antijamming and lightweight ship detector designed for spaceborne optical images
Zhang et al. Augmented visual feature modeling for matching in low-visibility based on cycle-labeling of Superpixel Flow
Tian et al. High confidence detection for moving target in aerial video
CN111986233A (en) Large-scene minimum target remote sensing video tracking method based on feature self-learning
Gao Automatic detection, segmentation and tracking of vehicles in wide-area aerial imagery
Jiang et al. Multi-camera calibration free bev representation for 3d object detection
Abdein et al. Self-supervised learning of optical flow, depth, camera pose and rigidity segmentation with occlusion handling
Dai et al. OAMatcher: An overlapping areas-based network with label credibility for robust and accurate feature matching

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant