CN104156982B - Motion target tracking method and device - Google Patents
- Publication number
- CN104156982B (application CN201410373576.2A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Analysis (AREA)
Abstract
Embodiments of the present invention provide a motion target tracking method and device. The method may include: obtaining an image from a video sequence; obtaining a foreground map of the image; performing morphological processing on the foreground map; performing a connected-region extraction operation on the morphologically processed foreground map to obtain the moving targets contained in the foreground map; and determining the motion trajectory of each moving target based on the motion template corresponding to that moving target. The technical solutions provided by the embodiments of the present invention help to reduce the computational complexity of tracking moving targets.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a motion target tracking method and device.
Background technology
Intelligent video surveillance draws on computer image processing, computer vision, pattern recognition, artificial intelligence, and many other fields, and therefore has strong research value and extremely wide application: commercial monitoring of hotels, residential communities, buildings, and shopping malls; government monitoring of medical facilities, airports, stations, and traffic scenes; and video-based precision weapon aiming systems in the military.
The core of intelligent video surveillance is the detection and tracking of moving targets. The quality of a target tracking system is evaluated by two indicators, real-time performance and accuracy, and detecting and tracking moving targets in complex scenes is the main difficulty in motion target tracking.
Existing motion target tracking methods mainly include optical flow methods and particle filter methods. However, existing methods generally need to analyze and compute each pixel of each image in the video sequence separately, which makes the computational complexity of the prior art very high.
Summary of the invention
Embodiments of the present invention provide a motion target tracking method and device, so as to reduce the computational complexity of tracking moving targets.
A first aspect of the present invention provides a motion target tracking method, which may include:
obtaining an image from a video sequence;
obtaining a foreground map of the image;
performing morphological processing on the foreground map;
performing a connected-region extraction operation on the morphologically processed foreground map to obtain the moving targets contained in the foreground map; and
determining the motion trajectory of a moving target based on the motion template corresponding to the moving target.
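The steps above can be sketched end to end. The following is a minimal illustration, not the patented implementation: it derives a binary foreground map by differencing a frame against a background frame, extracts connected regions by flood fill, and reports each region's centroid as the moving target's reference point. The toy frames and the threshold value are invented for illustration.

```python
from collections import deque

def foreground_map(frame, background, thresh=30):
    """Binary foreground map: 255 where the frame differs from the background."""
    h, w = len(frame), len(frame[0])
    return [[255 if abs(frame[y][x] - background[y][x]) > thresh else 0
             for x in range(w)] for y in range(h)]

def connected_regions(fg):
    """4-connected regions of foreground pixels, via BFS flood fill."""
    h, w = len(fg), len(fg[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for y in range(h):
        for x in range(w):
            if fg[y][x] == 255 and not seen[y][x]:
                pixels, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1,cx),(cy+1,cx),(cy,cx-1),(cy,cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and fg[ny][nx] == 255 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                regions.append(pixels)
    return regions

def centroid(pixels):
    """Centroid (x, y) of a region, used as the target's reference point."""
    xs = sum(p[1] for p in pixels) / len(pixels)
    ys = sum(p[0] for p in pixels) / len(pixels)
    return (xs, ys)

# Toy 5x5 frames: flat background, one bright 2x2 blob in the frame.
background = [[10] * 5 for _ in range(5)]
frame = [row[:] for row in background]
for y in (1, 2):
    for x in (1, 2):
        frame[y][x] = 200

fg = foreground_map(frame, background)
targets = [centroid(r) for r in connected_regions(fg)]
print(targets)  # one target, centroid at (1.5, 1.5)
```

Tracking then operates only on these per-target reference points, which is what keeps the per-frame cost far below per-pixel methods.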
With reference to the first aspect, in a first possible implementation of the first aspect, performing morphological processing on the foreground map includes performing at least one of the following morphological operations on the foreground map: filtering, dilation and erosion, opening, and closing.
With reference to the first aspect or the first possible implementation of the first aspect, in a second possible implementation of the first aspect, determining the motion trajectory of the moving target based on the motion template corresponding to the moving target includes:
calculating the Euclidean distance between the moving target and each motion template in a motion template set;
determining, based on the calculated Euclidean distances between the moving target and each motion template in the motion template set, the matching degree between the moving target and each motion template in the motion template set; and
if a first motion template matching the moving target is determined based on the matching degrees, adding the reference point coordinates of the moving target to the motion trajectory array in the first motion template, and determining the motion trajectory of the moving target based on the reference point coordinates currently recorded in that motion trajectory array, where the first motion template is one of the motion templates in the motion template set.
With reference to the second possible implementation of the first aspect, in a third possible implementation of the first aspect, the method further includes:
if it is determined based on the matching degrees that the moving target does not match any motion template in the motion template set, generating a first motion template corresponding to the moving target and adding the first motion template to the motion template set, where the reference point coordinates of the moving target are recorded in the motion trajectory array of the first motion template.
With reference to the second or third possible implementation of the first aspect, in a fourth possible implementation of the first aspect, calculating the Euclidean distance between the moving target and each motion template in the motion template set includes:
constructing a distance matrix Dist using the motion template set and the moving target;
where the number of rows m of the distance matrix equals the number of motion templates currently contained in the motion template set, and the element D_i in Dist represents the Euclidean distance between the moving target and motion template i in the motion template set.
With reference to the fourth possible implementation of the first aspect, in a fifth possible implementation of the first aspect, D_i is calculated by a formula (presented as an image in the original publication), where:
T is a first threshold;
(x_j, y_j) are the reference point coordinates of the moving target;
(x_i, y_i) are the reference point coordinates of motion template i in the motion template set; and
(dx, dy) is the predicted displacement recorded in motion template i.
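The patent's formula for D_i is an image and is not reproduced here, so the sketch below is a hedged reading of the definitions around it: D_i is taken as the Euclidean distance from the target's reference point (x_j, y_j) to template i's reference point (x_i, y_i) advanced by its predicted displacement (dx, dy), with distances beyond the threshold T capped at T. The cap, the threshold value, and the template record layout are assumptions.

```python
import math

T = 50.0  # first threshold (illustrative value, not from the patent)

def distance_row(target_ref, templates):
    """One column of Dist: assumed D_i between a target and each template.

    Assumed form: Euclidean distance from the target's reference point to
    template i's reference point advanced by the template's predicted
    displacement; distances beyond T are treated as T (no match possible).
    """
    xj, yj = target_ref
    dist = []
    for tpl in templates:
        xi, yi = tpl["ref"]    # template reference point
        dx, dy = tpl["pred"]   # predicted displacement
        d = math.hypot(xj - (xi + dx), yj - (yi + dy))
        dist.append(min(d, T))
    return dist

templates = [
    {"ref": (10.0, 10.0), "pred": (3.0, 4.0)},   # predicts target near (13, 14)
    {"ref": (90.0, 90.0), "pred": (0.0, 0.0)},
]
print(distance_row((13.0, 14.0), templates))  # [0.0, 50.0] (second capped at T)
```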
With reference to the fourth or fifth possible implementation of the first aspect, in a sixth possible implementation of the first aspect, determining, based on the calculated Euclidean distances between the moving target and each motion template in the motion template set, the matching degree between the moving target and each motion template in the motion template set includes:
constructing a matching matrix Match based on the distance matrix;
where the number of rows m of the matching matrix equals the number of motion templates currently contained in the motion template set, and the element M_i in Match represents the matching degree between the moving target and motion template i in the motion template set, the matching degree being determined based on the Euclidean distance between the moving target and motion template i.
With reference to the sixth possible implementation of the first aspect, in a seventh possible implementation of the first aspect, M_i is calculated by a formula (presented as an image in the original publication).
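The formula for M_i is likewise an image in the patent, so the matching rule below is only one plausible reading of the text: template i is considered a match (M_i = 1) when its distance D_i is both the smallest in the column and strictly below the threshold T, and 0 otherwise. This rule is an assumption, not the patented formula.

```python
def match_column(dist, T=50.0):
    """One column of Match built from a column of Dist.

    Assumed rule (the patent's formula is an image): M_i = 1 only if D_i
    is strictly below threshold T and is the smallest distance in the
    column; all other entries are 0.
    """
    best = min(range(len(dist)), key=lambda i: dist[i])
    return [1 if i == best and dist[i] < T else 0 for i in range(len(dist))]

print(match_column([0.0, 50.0]))   # [1, 0] -> template 0 matches
print(match_column([50.0, 50.0]))  # [0, 0] -> no template matches
```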
With reference to any one of the second through seventh possible implementations of the first aspect, in an eighth possible implementation of the first aspect, after the first motion template matching the moving target is determined based on the matching degrees, the method further includes: updating the predicted displacement (dx, dy) contained in the first motion template by a formula (presented as an image in the original publication), where (x_i1, y_i1) are the reference point coordinates of the moving target, and (x_i, y_i) are the reference point coordinates most recently added to the motion trajectory array of the first motion template before the reference point coordinates of the moving target were added.
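Since the update formula is an image in the patent, the sketch below assumes the simplest update consistent with the surrounding definitions: the new predicted displacement is the step just observed, i.e. the vector from the previously recorded reference point (x_i, y_i) to the newly added one (x_i1, y_i1). The template record layout is invented for illustration.

```python
def update_predicted_displacement(template, new_ref):
    """Assumed update (the patent's formula is an image): set the
    predicted displacement to the vector from the most recently recorded
    reference point to the newly added one, then record the new point."""
    xi, yi = template["trajectory"][-1]  # most recent point before the new one
    xi1, yi1 = new_ref
    template["trajectory"].append(new_ref)
    template["pred"] = (xi1 - xi, yi1 - yi)

tpl = {"trajectory": [(10.0, 10.0)], "pred": (0.0, 0.0)}
update_predicted_displacement(tpl, (13.0, 14.0))
print(tpl["pred"])  # (3.0, 4.0)
```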
With reference to any one of the second through eighth possible implementations of the first aspect, in a ninth possible implementation of the first aspect, after the first motion template matching the moving target is determined based on the matching degrees, the method further includes:
adding s1 to the confidence level Threshold_In1, recorded in the first motion template, that the first motion template becomes a motion tracking template; and
subtracting s1 from the confidence level Threshold_In1, recorded in each other motion template in the motion template set, that the motion template becomes a motion tracking template, and subtracting s2 from the confidence level Threshold_In2, recorded in each such other motion template, that the motion template disappears, where s1 and s2 are positive integers.
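The confidence bookkeeping above is mechanical and can be shown directly. The field names and step sizes below are illustrative stand-ins for the Threshold_In1/Threshold_In2 records and the positive integers s1, s2.

```python
S1, S2 = 1, 1  # positive-integer step sizes (illustrative values)

def update_confidences(templates, matched_index):
    """After the template at `matched_index` matches a target: raise its
    confidence of becoming a motion tracking template; for every other
    template, lower that confidence and lower its disappearance
    confidence as well."""
    for i, tpl in enumerate(templates):
        if i == matched_index:
            tpl["threshold_in1"] += S1
        else:
            tpl["threshold_in1"] -= S1
            tpl["threshold_in2"] -= S2

templates = [
    {"threshold_in1": 5, "threshold_in2": 8},
    {"threshold_in1": 5, "threshold_in2": 8},
]
update_confidences(templates, matched_index=0)
print(templates)
# [{'threshold_in1': 6, 'threshold_in2': 8}, {'threshold_in1': 4, 'threshold_in2': 7}]
```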
With reference to any one of the second through ninth possible implementations of the first aspect, in a tenth possible implementation of the first aspect, determining the motion trajectory of the moving target based on the reference point coordinates currently recorded in the motion trajectory array in the first motion template includes:
calculating the direction gradients between adjacent pairs of the P1 earliest recorded reference point coordinates in the motion trajectory array in the first motion template to obtain P1-1 direction gradients, and calculating the angles between adjacent gradients among the P1-1 direction gradients to obtain P1-2 angles, where P1 is a positive integer greater than 2;
calculating the direction gradients between adjacent pairs of the P2 most recently recorded reference point coordinates in the motion trajectory array in the first motion template to obtain P2-1 direction gradients, and calculating the angles between adjacent gradients among the P2-1 direction gradients to obtain P2-2 angles, where P2 is a positive integer greater than 2; and
if the number of angles among the P1-2 angles that exceed a first angle threshold is greater than P3, the number of angles among the P2-2 angles that exceed a second angle threshold is greater than P4, and the area of the motion trajectory region corresponding to the reference point coordinates recorded in the motion trajectory array in the first motion template is greater than a first area threshold, drawing the motion trajectory of the moving target using the reference point coordinates currently recorded in the motion trajectory array in the first motion template.
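The counting in the tenth implementation (P1 points yield P1-1 direction gradients, which yield P1-2 angles) can be made concrete. The sketch below is a hedged illustration for one end of a trajectory array: the definitions of "direction gradient" (segment heading via atan2) and "angle between adjacent gradients" (absolute heading difference folded into [0, 180]) are assumptions, since the patent gives no formulas, and the thresholds are invented.

```python
import math

def segment_directions(points):
    """Direction gradient between each pair of adjacent reference points,
    expressed as a heading angle in degrees."""
    return [math.degrees(math.atan2(y2 - y1, x2 - x1))
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

def turn_angles(directions):
    """Angle between adjacent direction gradients, folded into [0, 180]."""
    angles = []
    for a, b in zip(directions, directions[1:]):
        d = abs(b - a) % 360.0
        angles.append(min(d, 360.0 - d))
    return angles

# The P1 = 5 earliest points of a track that makes one sharp turn.
track = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
dirs = segment_directions(track)      # P1-1 = 4 direction gradients
angles = turn_angles(dirs)            # P1-2 = 3 angles
print([round(a, 1) for a in angles])  # [0.0, 90.0, 0.0]

first_angle_threshold, P3 = 45.0, 0   # illustrative values
sharp = sum(1 for a in angles if a > first_angle_threshold)
print(sharp > P3)  # True: the sharp-turn count exceeds P3
```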
With reference to any one of the second through tenth possible implementations of the first aspect, in an eleventh possible implementation of the first aspect, the reference point coordinates are center point coordinates or centroid point coordinates.
A second aspect of the present invention provides a motion target tracking device, including:
an acquiring unit, configured to obtain an image from a video sequence;
an obtaining unit, configured to obtain a foreground map of the image;
a processing unit, configured to perform morphological processing on the foreground map;
an extraction unit, configured to perform a connected-region extraction operation on the morphologically processed foreground map to obtain the moving targets contained in the foreground map; and
a tracking processing unit, configured to determine the motion trajectory of a moving target based on the motion template corresponding to the moving target.
With reference to the second aspect, in a first possible implementation of the second aspect, performing morphological processing on the foreground map includes performing at least one of the following morphological operations on the foreground map: filtering, dilation and erosion, opening, and closing.
With reference to the second aspect or the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the tracking processing unit is specifically configured to: calculate the Euclidean distance between the moving target and each motion template in a motion template set; determine, based on the calculated Euclidean distances, the matching degree between the moving target and each motion template in the motion template set; and, if a first motion template matching the moving target is determined based on the matching degrees, add the reference point coordinates of the moving target to the motion trajectory array in the first motion template and determine the motion trajectory of the moving target based on the reference point coordinates currently recorded in that array, where the first motion template is one of the motion templates in the motion template set.
With reference to the second possible implementation of the second aspect, in a third possible implementation of the second aspect, the tracking processing unit is further configured to: if it is determined based on the matching degrees that the moving target does not match any motion template in the motion template set, generate a first motion template corresponding to the moving target and add the first motion template to the motion template set, where the reference point coordinates of the moving target are recorded in the motion trajectory array of the first motion template.
With reference to the second or third possible implementation of the second aspect, in a fourth possible implementation of the second aspect, in respect of calculating the Euclidean distance between the moving target and each motion template in the motion template set, the tracking processing unit is specifically configured to construct a distance matrix Dist using the motion template set and the moving target, where the number of rows m of the distance matrix equals the number of motion templates currently contained in the motion template set, and the element D_i in Dist represents the Euclidean distance between the moving target and motion template i in the motion template set.
With reference to the fourth possible implementation of the second aspect, in a fifth possible implementation of the second aspect, D_i is calculated by a formula (presented as an image in the original publication), where:
T is a first threshold;
(x_j, y_j) are the reference point coordinates of the moving target;
(x_i, y_i) are the reference point coordinates of motion template i in the motion template set; and
(dx, dy) is the predicted displacement recorded in motion template i.
With reference to the fourth or fifth possible implementation of the second aspect, in a sixth possible implementation of the second aspect, in respect of determining, based on the calculated Euclidean distances, the matching degree between the moving target and each motion template in the motion template set, the tracking processing unit is specifically configured to construct a matching matrix Match based on the distance matrix, where the number of rows m of the matching matrix equals the number of motion templates currently contained in the motion template set, and the element M_i in Match represents the matching degree between the moving target and motion template i in the motion template set, the matching degree being determined based on the Euclidean distance between the moving target and motion template i.
With reference to the sixth possible implementation of the second aspect, in a seventh possible implementation of the second aspect, M_i is calculated by a formula (presented as an image in the original publication).
With reference to any one of the second through seventh possible implementations of the second aspect, in an eighth possible implementation of the second aspect, after the first motion template matching the moving target is determined based on the matching degrees, the tracking processing unit is further configured to update the predicted displacement (dx, dy) contained in the first motion template by a formula (presented as an image in the original publication), where (x_i1, y_i1) are the reference point coordinates of the moving target, and (x_i, y_i) are the reference point coordinates most recently added to the motion trajectory array of the first motion template before the reference point coordinates of the moving target were added.
With reference to any one of the second through eighth possible implementations of the second aspect, in a ninth possible implementation of the second aspect, after the first motion template matching the moving target is determined based on the matching degrees, the tracking processing unit is further configured to: add s1 to the confidence level Threshold_In1, recorded in the first motion template, that the first motion template becomes a motion tracking template; and subtract s1 from the confidence level Threshold_In1, recorded in each other motion template in the motion template set, that the motion template becomes a motion tracking template, and subtract s2 from the confidence level Threshold_In2, recorded in each such other motion template, that the motion template disappears, where s1 and s2 are positive integers.
With reference to any one of the second through ninth possible implementations of the second aspect, in a tenth possible implementation of the second aspect, in respect of determining the motion trajectory of the moving target based on the reference point coordinates currently recorded in the motion trajectory array in the first motion template, the tracking processing unit is specifically configured to:
calculate the direction gradients between adjacent pairs of the P1 earliest recorded reference point coordinates in the motion trajectory array in the first motion template to obtain P1-1 direction gradients, and calculate the angles between adjacent gradients among the P1-1 direction gradients to obtain P1-2 angles, where P1 is a positive integer greater than 2;
calculate the direction gradients between adjacent pairs of the P2 most recently recorded reference point coordinates in the motion trajectory array in the first motion template to obtain P2-1 direction gradients, and calculate the angles between adjacent gradients among the P2-1 direction gradients to obtain P2-2 angles, where P2 is a positive integer greater than 2; and
if the number of angles among the P1-2 angles that exceed a first angle threshold is greater than P3, the number of angles among the P2-2 angles that exceed a second angle threshold is greater than P4, and the area of the motion trajectory region corresponding to the reference point coordinates recorded in the motion trajectory array in the first motion template is greater than a first area threshold, draw the motion trajectory of the moving target using the reference point coordinates currently recorded in the motion trajectory array in the first motion template.
With reference to any one of the second through tenth possible implementations of the second aspect, in an eleventh possible implementation of the second aspect, the reference point coordinates are center point coordinates or centroid point coordinates.
As can be seen, in this embodiment, after morphological processing is performed on the foreground map of an image, a connected-region extraction operation is performed on the processed foreground map to obtain the moving targets contained in it, and the motion trajectory of each moving target is determined based on the motion template corresponding to that moving target. Because this scheme tracks at the granularity of whole moving targets, rather than performing tracking computation on each individual pixel as the prior art requires, the above technical solution of the present invention helps to greatly reduce the computational complexity of tracking moving targets.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Apparently, the drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a motion target tracking method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of another motion target tracking method provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of a motion target tracking device provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of a motion target tracking device provided by an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention provide a motion target tracking method and device, so as to reduce the computational complexity of tracking moving targets.
To enable a person skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Details are described below.
The terms "first", "second", "third", and "fourth" in the specification, claims, and accompanying drawings are used to distinguish different objects rather than to describe a particular order. In addition, the terms "comprising" and "having" and any variations of them are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that contains a series of steps or units is not limited to the listed steps or units, but may optionally further include steps or units that are not listed, or other steps or units inherent to the process, method, product, or device.
In an embodiment of the motion target tracking method of the present invention, a motion target tracking method may include: obtaining an image from a video sequence; obtaining a foreground map of the image; performing morphological processing on the foreground map; performing a connected-region extraction operation on the morphologically processed foreground map to obtain the moving targets contained in the foreground map; and determining the motion trajectory of a moving target based on the motion template corresponding to the moving target.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a motion target tracking method provided by an embodiment of the present invention. As shown in Fig. 1, the motion target tracking method provided by this embodiment may include the following steps.
101. Obtain an image from a video sequence.
102. Obtain a foreground map of the image.
The foreground map of an image may contain one or more moving targets, or may contain no moving target at all. The embodiments of the present invention are mainly illustrated using the scenario in which the foreground map contains one or more moving targets.
103. Perform morphological processing on the foreground map.
104. Perform a connected-region extraction operation on the morphologically processed foreground map to obtain the moving targets contained in the foreground map, and determine the motion trajectory of a moving target based on the motion template corresponding to the moving target.
As can be seen, in this embodiment, after morphological processing is performed on the foreground map of an image, a connected-region extraction operation is performed on the processed foreground map to obtain the moving targets contained in it, and the motion trajectory of each moving target is determined based on the motion template corresponding to that moving target. Because this scheme tracks at the granularity of whole moving targets, rather than performing tracking computation on each individual pixel as the prior art requires, the above technical solution of the present invention helps to greatly reduce the computational complexity of tracking moving targets.
Optionally, in some possible implementations of the present invention, obtaining the foreground map of the image may, for example, include: obtaining a background map of the image, and obtaining the foreground map of the image based on the image and the background map. The background map contains no moving targets and may be a grayscale background map or a color background map. The foreground map is an image that marks out the moving targets; for example, non-moving-target regions are all marked black and moving-target regions are all marked white. For example, the background modeling may use a Gaussian mixture background modeling algorithm: images are read from the video sequence, and the background map is established from the images using the Gaussian mixture background modeling algorithm. The image and the background map are then processed by a background difference algorithm to obtain the foreground map. The foreground map of an image may contain one or more moving targets, or may contain no moving target at all; the embodiments of the present invention are mainly illustrated using the scenario in which the foreground map contains one or more moving targets.
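The background modeling named above is a Gaussian mixture model; the sketch below deliberately substitutes a much simpler running-average background with the same interface, so it illustrates the background-difference step but is not the mixture model itself. Frame values and thresholds are invented.

```python
def update_background(background, frame, alpha=0.05):
    """Running-average background model: a lightweight stand-in for the
    Gaussian mixture modeling named in the text (not the mixture model
    itself). alpha controls how quickly the background adapts."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]

def background_difference(frame, background, thresh=30):
    """Background difference: white (255) where the frame departs from
    the background model, black (0) elsewhere."""
    return [[255 if abs(f - b) > thresh else 0 for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

bg = [[10.0, 10.0], [10.0, 10.0]]
frame = [[10.0, 200.0], [10.0, 10.0]]
print(background_difference(frame, bg))  # [[0, 255], [0, 0]]
bg = update_background(bg, frame)
print(bg[0][1])                          # 19.5: the background adapts slowly
```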
In some possible implementations of the present invention, performing morphological processing on the foreground map may include performing at least one of the following morphological operations on the foreground map: filtering, dilation and erosion, opening, and closing.
If for example, foreground picture contains larger noise, treatment and dilation erosion treatment can be filtered to foreground picture.
Median filter process first can be carried out to remove noise to foreground picture, then to filtering process after foreground picture carry out at first expansion
Reason post-etching treatment, to remove the cavity of moving target, and the fragment of moving target is merged.Assuming that medium filtering template
It is the template of n*n, foreground picture is scanned using medium filtering template.Wherein.The preferred values of the n are odd number, n values example
Such as can 3,5,7,9,11 or 13.
At each position of the scan, the n*n pixel values of the foreground picture covered by the median filter template are first sorted, and the median is then chosen to replace the pixel value at the corresponding position in the foreground picture. For example, with a 3*3 median filter template applied to a binary foreground picture, a pixel is judged a foreground point only when at least five of the nine neighbourhood pixels under the template are foreground points; in this way some isolated small noise can be removed.
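The median-filtering step described above can be sketched as follows. This is a minimal illustration in Python/NumPy, not the patent's implementation, and the function name is ours; for a binary foreground picture, taking the median of the n*n window is exactly the at-least-five-of-nine majority vote described above.

```python
import numpy as np

def median_filter_binary(fg, n=3):
    """Median-filter a binary (0/255) foreground map with an n*n template.

    For a binary image this reduces to a majority vote: the centre pixel
    becomes foreground only when at least half of the n*n pixels under the
    template are foreground, which removes isolated small noise.
    """
    assert n % 2 == 1, "template side should be odd (3, 5, 7, ...)"
    h, w = fg.shape
    pad = n // 2
    padded = np.pad(fg, pad, mode="constant", constant_values=0)
    out = np.zeros_like(fg)
    for y in range(h):
        for x in range(w):
            window = padded[y:y + n, x:x + n]
            # sorting the n*n values and taking the middle one == majority vote
            out[y, x] = np.sort(window, axis=None)[(n * n) // 2]
    return out

# An isolated white pixel (noise) is removed; a solid 3x3 blob survives.
img = np.zeros((7, 7), dtype=np.uint8)
img[1, 1] = 255            # isolated noise point
img[3:6, 3:6] = 255        # genuine foreground blob
clean = median_filter_binary(img, 3)
```

Note that a blob's corners are also eroded by this vote; the subsequent dilate-then-erode pass described below restores the target's shape.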
Optionally, dilation and erosion may use a 3*3 template whose origin is the template centre point. Dilation performs an AND-type operation between the template and the corresponding pixel values of the foreground picture: if the result is 0 (every covered pixel is 0), the binary-image position corresponding to the origin is assigned 0, otherwise 255. Erosion performs the same operation between the template and the corresponding pixel values of the binary image: if the result is 255 (every covered pixel is 255), the binary-image position corresponding to the origin is assigned 255, otherwise 0. Of course, in some application scenarios dilation and erosion may also be performed in other ways.
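The dilation and erosion just described reduce, for a full 3*3 template, to the standard binary morphology operations, which can be sketched as follows; this is an illustrative NumPy version under that reading, with function names of our own choosing.

```python
import numpy as np

def dilate3(fg):
    """Binary dilation with a full 3x3 template: a pixel becomes 255 when
    any pixel under the template is 255 (fills small holes in a target)."""
    p = np.pad(fg, 1, mode="constant", constant_values=0)
    h, w = fg.shape
    out = np.zeros_like(fg)
    for y in range(h):
        for x in range(w):
            out[y, x] = 255 if p[y:y + 3, x:x + 3].max() == 255 else 0
    return out

def erode3(fg):
    """Binary erosion with a full 3x3 template: a pixel stays 255 only when
    every pixel under the template is 255 (shrinks back the dilation)."""
    p = np.pad(fg, 1, mode="constant", constant_values=0)
    h, w = fg.shape
    out = np.zeros_like(fg)
    for y in range(h):
        for x in range(w):
            out[y, x] = 255 if p[y:y + 3, x:x + 3].min() == 255 else 0
    return out

# A one-pixel hole inside a solid blob is closed by dilate-then-erode.
blob = np.full((7, 7), 255, dtype=np.uint8)
blob[3, 3] = 0                      # hole inside the moving target
closed = erode3(dilate3(blob))
```

Dilating first and eroding second is the closing operation named earlier; it fills the hole without growing the target's interior.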
Optionally, in some possible implementations of the invention, determining the movement locus of the above moving target based on the motion mask corresponding to the moving target may include: calculating the Euclidean distance between the moving target and each motion mask in a motion mask set; determining, based on the calculated Euclidean distances between the moving target and each motion mask in the set, the matching degree between the moving target and each motion mask in the set; and, if a first motion mask matching the moving target is determined based on the matching degrees, adding the reference point coordinates of the moving target to the movement locus array in the first motion mask, and determining the movement locus of the moving target based on the reference point coordinates currently recorded in that movement locus array, where the first motion mask is one of the motion masks in the motion mask set.
In addition, the above moving target tracking method may further include: if it is determined, based on the matching degrees, that the moving target matches none of the motion masks in the motion mask set, a first motion mask corresponding to the moving target may be generated and added to the motion mask set, the movement locus array of this first motion mask recording the reference point coordinates of the moving target.
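Creating the mask for a target that matched nothing might look like the following sketch. All field names and initial values here are illustrative assumptions, not taken from the patent (the initial disappearance confidence of 25 echoes the example value given later in the text).

```python
def new_motion_mask(target_xy):
    """Create the motion mask for a target that matched no existing mask:
    its reference point becomes the first entry of the movement locus array.

    Fields (all names are ours):
      track - the movement locus array of reference point coordinates
      disp  - the predictive displacement (dx, dy), nothing known yet
      in1   - tracking confidence (Threshold_In1 in the text)
      in2   - disappearance confidence (Threshold_In2 in the text)
    """
    return {"track": [target_xy], "disp": (0, 0), "in1": 0, "in2": 25}

mask_set = []                            # the motion mask set
mask_set.append(new_motion_mask((42, 17)))
```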
Optionally, in some possible implementations of the invention, calculating the Euclidean distance between the moving target and each motion mask in the motion mask set may include: constructing a distance matrix Dist from the motion mask set and the moving target, where the row number m of the distance matrix equals the number of motion masks currently included in the motion mask set, and the element Di in Dist represents the Euclidean distance between the moving target and motion mask i in the set.
Di may, for example, be calculated by the equation below, where (xi, yi) is the reference point coordinates of motion mask i in the set, (xj, yj) is the reference point coordinates of the moving target, (dx, dy) is the predictive displacement recorded in motion mask i, and T is a first threshold. The value range of the first threshold T may, for example, be [20, 150]; specifically it may be 20, 25, 30, 35, 40, 50, 120, 140, 150, and so on.
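The distance formula itself is an image in the original and is not reproduced here; the sketch below is one plausible reading consistent with the surrounding definitions: predict the mask's next position with its recorded displacement (dx, dy), measure the Euclidean distance to the target's reference point, and treat distances beyond the first threshold T as "no candidate". Field names are ours.

```python
import math

def predicted_distance(mask, target_xy, T=50.0):
    """Euclidean distance between a motion mask's predicted position
    (xi + dx, yi + dy) and a moving target's reference point (xj, yj).
    Distances beyond the first threshold T are mapped to infinity,
    meaning the mask is not a candidate for this target."""
    (xi, yi), (dx, dy) = mask["ref"], mask["disp"]
    xj, yj = target_xy
    d = math.hypot(xi + dx - xj, yi + dy - yj)
    return d if d <= T else float("inf")

# Dist has one row per motion mask currently in the set (m rows).
masks = [{"ref": (10, 10), "disp": (5, 0)},
         {"ref": (80, 40), "disp": (0, 3)}]
target = (16, 10)
Dist = [predicted_distance(m, target, T=50.0) for m in masks]
```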
Optionally, in some possible implementations of the invention, determining the matching degree between the moving target and each motion mask in the set based on the calculated Euclidean distances may include: constructing a matching matrix Match based on the distance matrix, where the row number m of the matching matrix Match equals the number of motion masks currently included in the motion mask set, and the element Mi in the matching matrix Match represents the matching degree between the moving target and motion mask i in the set, this matching degree being determined based on the Euclidean distance between the moving target and motion mask i in the set.
Optionally, in some possible implementations of the invention, Mi is calculated by the equation below, where the element Mk represents the matching degree between the moving target and motion mask k in the set. Optionally, in other possible implementations of the invention, Mi may also be calculated by an alternative equation, where a1 may be a positive number; for example a1 may equal 0.2, 1, 2, 3, 4.5, 8.3 or another value. When Mi is greater than the other elements in the matching matrix Match (for example, Mi equals a1), the moving target is considered to match motion mask i in the set.
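The matching-degree formula is also an image in the original; a minimal reading consistent with the text around it is sketched below: award the (finite) nearest motion mask a score of a1 and every other mask 0, so the matched mask is the one whose Mi exceeds all other elements. This is an assumption about the elided formula, not the patent's definition.

```python
def match_target_to_template(distances, a1=1.0):
    """Turn one target's column of Euclidean distances into matching
    degrees M, and return (matched mask index, M).

    The nearest mask within the distance threshold (finite distance)
    scores a1; all others score 0.  Returns None as the index when no
    mask is close enough, i.e. the target matches nothing."""
    M = [0.0] * len(distances)
    finite = [d for d in distances if d != float("inf")]
    if not finite:
        return None, M
    i = distances.index(min(finite))
    M[i] = a1
    return i, M

idx, M = match_target_to_template([12.0, float("inf"), 30.0], a1=1.0)
```

The None case is exactly the branch described above in which a new motion mask is generated for the unmatched target.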
Optionally, in some possible implementations of the invention, after the first motion mask matching the moving target has been determined based on the matching degrees (for example, when the matching degree between the moving target and the first motion mask is greater than the matching degrees between the moving target and the other motion masks in the set), the moving target tracking method may further include updating the predictive displacement (dx, dy) recorded in the first motion mask as follows, where (xj, yj) is the reference point coordinates of the moving target, and (xi, yi) is the reference point coordinates most recently added to the movement locus array of the first motion mask before the reference point coordinates of the moving target were added.
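The update equation itself is elided in the original; the natural reading given the definitions of (xi, yi) and (xj, yj) is (dx, dy) = (xj - xi, yj - yi), i.e. the offset from the mask's most recently recorded reference point to the newly matched target. The sketch below assumes that reading and our own field names.

```python
def update_predicted_displacement(mask, target_xy):
    """Refresh the predictive displacement stored in the matched motion
    mask, then append the target's reference point to the mask's
    movement locus array."""
    xi, yi = mask["track"][-1]          # newest point before this frame
    xj, yj = target_xy
    mask["disp"] = (xj - xi, yj - yi)   # assumed reading of the formula
    mask["track"].append(target_xy)     # extend the movement locus array

tpl = {"track": [(10, 10)], "disp": (0, 0)}
update_predicted_displacement(tpl, (14, 13))
```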
Optionally, in some possible implementations of the invention, after the first motion mask matching the moving target has been determined based on the matching degrees, the moving target tracking method may further include: increasing by s1 the confidence Threshold_In1, recorded in the first motion mask, that the first motion mask becomes a motion tracking template, where s1 is a positive integer, for example 1, 2, 3, 4, 6, 8, 10, 20, 51 or another positive integer. For each other motion mask in the set apart from the first motion mask, the confidence Threshold_In1 of becoming a motion tracking template recorded in that motion mask is decreased by s1, and the confidence Threshold_In2 that the motion mask disappears, recorded in that motion mask, is decreased by s2, where s2 is a positive integer, for example 1, 2, 3, 4, 6, 8, 10, 20, 51 or another positive integer.
Further, when the disappearance confidence Threshold_In2 recorded in some motion mask in the set is less than or equal to S22 (S22 being, for example, less than or equal to 0), that motion mask may be rejected from the motion mask set.
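The per-frame confidence bookkeeping just described can be sketched as follows; mask field names (in1 for Threshold_In1, in2 for Threshold_In2) are illustrative assumptions.

```python
def update_confidences(templates, matched_index, s1=1, s2=1, S22=0):
    """Per-frame confidence bookkeeping over the motion mask set.

    The matched mask gains s1 tracking confidence (Threshold_In1).
    Every other mask loses s1 tracking confidence and s2 disappearance
    confidence (Threshold_In2); masks whose Threshold_In2 drops to S22
    or below are rejected from the set."""
    kept = []
    for i, t in enumerate(templates):
        if i == matched_index:
            t["in1"] += s1
            kept.append(t)
        else:
            t["in1"] -= s1
            t["in2"] -= s2
            if t["in2"] > S22:      # otherwise the mask is dropped
                kept.append(t)
    return kept

masks = [{"in1": 5, "in2": 25}, {"in1": 0, "in2": 1}]
masks = update_confidences(masks, matched_index=0, s1=1, s2=1, S22=0)
```

After one frame the matched mask's tracking confidence rises while the stale mask, whose disappearance confidence fell to S22, is removed.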
Further, the area of the moving target recorded in the first motion mask may also be updated using the area of the moving target, and the position of the moving target recorded in the first motion mask may also be updated using the position of the moving target.
Optionally, in some possible implementations of the invention, determining the movement locus of the moving target based on the reference point coordinates currently recorded in the movement locus array in the first motion mask may include: calculating the direction gradients between adjacent pairs of the P1 earliest-recorded reference point coordinates in the movement locus array of the first motion mask to obtain P1-1 direction gradients, and calculating the angles between adjacent direction gradients among these P1-1 direction gradients to obtain P1-2 angles, where P1 is a positive integer greater than 2; calculating the direction gradients between adjacent pairs of the P2 latest-recorded reference point coordinates in that movement locus array to obtain P2-1 direction gradients, and calculating the angles between adjacent direction gradients among these P2-1 direction gradients to obtain P2-2 angles, where P2 is a positive integer greater than 2; and, if the number of the P1-2 angles greater than a first angle threshold is greater than P3, the number of the P2-2 angles greater than a second angle threshold is greater than P4, and the area of the region where the movement locus corresponding to the reference point coordinates recorded in the movement locus array of the first motion mask lies is greater than a first area threshold, drawing the movement locus of the moving target using the reference point coordinates currently recorded in the movement locus array of the first motion mask.
Here, two adjacent reference point coordinates are two reference point coordinates that were added to the movement locus array of the first motion mask at adjacent times, and adjacent direction gradients are the two direction gradients computed from three reference point coordinates added to that array at adjacent times. For example, if reference point coordinates f1, f2 and f3 are three adjacent reference point coordinates in the movement locus array of the first motion mask, the direction gradient between f1 and f2 is f1_2, and the direction gradient between f2 and f3 is f2_3, then the direction gradients f1_2 and f2_3 are adjacent direction gradients. For ease of recording, the later a reference point coordinate is added to the movement locus array of a motion mask (e.g. the first motion mask), the larger (or the smaller) its index in that array.
Here, the region where the movement locus corresponding to the reference point coordinates recorded in the movement locus array of the first motion mask lies may be the maximum circumscribed rectangular region of that movement locus.
The value range of the first angle threshold may, for example, be [0°, 180°], with a preferred range of, for example, [30°, 90°]; specifically it may equal 30°, 38°, 45°, 60°, 70°, 90° or another angle. The value range of the second angle threshold may, for example, be [0°, 180°], with a preferred range of, for example, [30°, 90°]; specifically it may equal 30°, 38°, 48°, 65°, 77°, 90° or another angle. The value range of the first area threshold may, for example, be [10, 50]; specifically it may equal 10, 15, 21, 25, 30, 35, 40, 44, 48, 50 or another value.
Suppose Xmin denotes the minimum X coordinate among the reference point coordinates recorded in the movement locus array, Xmax the maximum X coordinate, Ymin the minimum Y coordinate, and Ymax the maximum Y coordinate. The Euclidean distance between (Xmin, Ymin) and (Xmax, Ymax) may then serve as the area of the region where the movement locus corresponding to the reference point coordinates recorded in the movement locus array of the first motion mask lies.
The angle between two direction gradients is obtained from the cosine of the angle between them: this cosine equals the inner product of the two direction gradients divided by the product of their moduli. A specific calculation formula may be as follows, where (dxi, dyi) is the current direction gradient and (dxi+1, dyi+1) is the adjacent next direction gradient.
P1 and P2 may, for example, equal 10, 12, 15 or 8; P3 may equal 6, 5 or 4; P4 may equal 3, 4, 5 or 6. If the motion mask is not locked, the start position of the track points is advanced by 2, i.e. the next direction gradient calculation starts two points past the current start position; that is, StartPos = StartPos + 2.
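The drawing decision described above can be sketched as follows. This is an illustrative reading, not the patent's code: the angle between adjacent direction gradients comes from the inner-product cosine given above, and the "area" of the track region is read, as the text states, as the Euclidean distance across the bounding rectangle. The default parameter values below are chosen small so the toy example triggers; the text's example values are larger (P1, P2 around 8-15, P3 and P4 around 4-6).

```python
import math

def turn_angles(points):
    """P-1 direction gradients between adjacent reference points, then
    P-2 angles (degrees) between adjacent direction gradients."""
    grads = [(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(points, points[1:])]
    angles = []
    for (ax, ay), (bx, by) in zip(grads, grads[1:]):
        # cos(angle) = inner product / product of moduli (assumes no
        # repeated points, so neither modulus is zero)
        cos = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
        angles.append(math.degrees(math.acos(max(-1.0, min(1.0, cos)))))
    return angles

def should_draw(track, P1=6, P2=6, P3=3, P4=3,
                ang1=45.0, ang2=45.0, area_thr=4.0):
    """Draw the locus only when the earliest P1 and latest P2 points both
    show enough sharp turns, and the track region is large enough
    (its 'area' read as the diagonal of the bounding rectangle)."""
    early = sum(a > ang1 for a in turn_angles(track[:P1]))
    late = sum(a > ang2 for a in turn_angles(track[-P2:]))
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    diag = math.hypot(max(xs) - min(xs), max(ys) - min(ys))
    return early > P3 and late > P4 and diag > area_thr

# A staircase track turns 90 degrees at every step; a straight track never turns.
zigzag = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2), (3, 2), (3, 3), (4, 3)]
straight = [(i, 0) for i in range(8)]
```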
Optionally, in some possible implementations of the invention, the above reference point coordinates are centre point coordinates, centroid point coordinates, or other point coordinates. For example, the reference point coordinates of the moving target may be the centre point coordinates, the centroid point coordinates, or the coordinates of some other point on the moving target.
It can be appreciated that if the foreground picture contains multiple moving targets, each moving target can be tracked in the manner described above. To facilitate a better understanding and implementation of the above schemes of the embodiments of the present invention, an illustrative introduction is given below with reference to some concrete application scenes.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of another moving target tracking method provided by another embodiment of the present invention. As shown in Fig. 2, the moving target tracking method provided by this embodiment may include the following contents:
201. Obtain an image from a video sequence.
202. Obtain the Background of the image.
203. Obtain the foreground picture of the image based on the image and the Background.
The Background does not contain the moving target, and the Background can be a gray-scale Background or a color Background. The foreground picture is an image in which the moving target is marked: for example, non-moving-target regions are all marked black while moving-target regions are all marked white. The background modeling may, for example, use a mixture-of-Gaussians background modeling algorithm, i.e., an image is read from the video sequence and a Background is built from the image with the mixture-of-Gaussians background modeling algorithm. The image and the Background are then processed by a background difference algorithm to obtain the foreground picture. The foreground picture of an image may contain one or more moving targets, or may contain no moving target at all. This embodiment is mainly illustrated by taking a scene in which the foreground picture contains b moving targets as an example.
204. Perform morphological processing on the foreground picture.
In some possible implementations of the invention, performing morphological processing on the foreground picture may include performing at least one of the following morphological operations on the foreground picture: filtering, dilation and erosion, an opening operation, and a closing operation.
For example, if the foreground picture contains considerable noise, filtering and dilation-and-erosion may be applied to it. The foreground picture may first be median-filtered to remove noise; the filtered foreground picture is then dilated and subsequently eroded, so as to fill holes in a moving target and merge its fragments. Suppose the median filter template is an n*n template, and the foreground picture is scanned with this template. The preferred value of n is odd; n may, for example, be 3, 5, 7, 9, 11 or 13.
At each position of the scan, the n*n pixel values of the foreground picture covered by the median filter template are first sorted, and the median is then chosen to replace the pixel value at the corresponding position in the foreground picture. For example, with a 3*3 median filter template applied to a binary foreground picture, a pixel is judged a foreground point only when at least five of the nine neighbourhood pixels under the template are foreground points; in this way some isolated small noise can be removed.
Optionally, dilation and erosion may use a 3*3 template whose origin is the template centre point. Dilation performs an AND-type operation between the template and the corresponding pixel values of the foreground picture: if the result is 0 (every covered pixel is 0), the binary-image position corresponding to the origin is assigned 0, otherwise 255. Erosion performs the same operation between the template and the corresponding pixel values of the binary image: if the result is 255 (every covered pixel is 255), the binary-image position corresponding to the origin is assigned 255, otherwise 0. Of course, in some application scenarios dilation and erosion may also be performed in other ways.
205. Perform a connected region extraction operation on the morphologically processed foreground picture to obtain the b moving targets contained in the foreground picture.
206. Determine the movement loci of the b moving targets based on the motion masks corresponding to the b moving targets.
Optionally, in some possible implementations of the invention, determining the movement loci of the moving targets based on the motion masks corresponding to the b moving targets may include: calculating the Euclidean distance between each of the b moving targets and each motion mask in a motion mask set; determining, based on the calculated Euclidean distances between each of the b moving targets and each motion mask in the set, the matching degree between each of the b moving targets and each motion mask in the set; and, if a first motion mask matching a moving target j among the b moving targets is determined based on the matching degrees, adding the reference point coordinates of moving target j to the movement locus array in the first motion mask, and determining the movement locus of moving target j based on the reference point coordinates currently recorded in that movement locus array, the first motion mask being one of the motion masks in the motion mask set.
In addition, the moving target tracking method may further include: if it is determined, based on the matching degrees, that moving target j matches none of the motion masks in the motion mask set, a first motion mask corresponding to moving target j may be generated and added to the motion mask set, the movement locus array of this first motion mask recording the reference point coordinates of moving target j.
Optionally, in some possible implementations of the invention, calculating the Euclidean distance between each of the b moving targets and each motion mask in the motion mask set may include: constructing a distance matrix Dist from the motion mask set and the b moving targets. Suppose the number of motion masks currently included in the motion mask set is m. The row number m of the distance matrix Dist equals the number of motion masks currently included in the motion mask set; the column number b of Dist equals the number of moving targets contained in the obtained foreground picture; and the element Dij in Dist represents the Euclidean distance between moving target j among the b moving targets and motion mask i in the set. For example, D24 represents the Euclidean distance between moving target 4 among the b moving targets and motion mask 2 in the set.
Dij may, for example, be calculated by the equation below, where (xi, yi) is the reference point coordinates of motion mask i in the set, (xj, yj) is the reference point coordinates of moving target j, (dx, dy) is the predictive displacement recorded in motion mask i, and T is a first threshold. The value range of the first threshold T may, for example, be [20, 150]; specifically it may be 20, 25, 30, 35, 40, 50, 120, 140, 150, and so on.
For example, suppose the number of motion masks currently included in the motion mask set is 3 and the number of moving targets contained in the obtained foreground picture is b; then Dist is the 3-row, b-column matrix given below.
Optionally, in some possible implementations of the invention, determining the matching degree between each of the b moving targets and each motion mask in the set based on the calculated Euclidean distances may include: constructing a matching matrix Match based on the distance matrix, where the row number m of the matching matrix Match equals the number of motion masks currently included in the motion mask set, the column number b of Match equals the number of moving targets contained in the obtained foreground picture, and the element Mij in Match represents the matching degree between moving target j among the b moving targets and motion mask i in the set, this matching degree being determined based on the Euclidean distance between moving target j and motion mask i in the set.
Optionally, in some possible implementations of the invention, Mij is calculated by the equation below, where the element Mkj represents the matching degree between moving target j and motion mask k in the set, and a1 may be a positive number, for example 0.2, 1, 2, 3, 4.5, 8.3, 10 or another value. The initial value of each element in the matching matrix Match may be 0; after Mij has been computed for each row and each column of Match, 0 ≤ Mij ≤ 2*a1.
When Mij in the matching matrix Match equals 2*a1, moving target j is considered to match motion mask i in the set.
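A reading of the elided per-row, per-column scoring that is consistent with the text (every element starts at 0, and 0 ≤ Mij ≤ 2*a1 afterwards) is sketched below: each row pass gives a1 to the row's nearest finite entry, each column pass does the same per column, so Mij == 2*a1 exactly when mask i and target j are mutually nearest. This is an assumption, not the patent's formula.

```python
def match_matrix(Dist, a1=1.0):
    """Fill the matching matrix Match (m rows, b columns) from the
    distance matrix Dist.  An element reaches 2*a1 only when it is the
    best in its row AND the best in its column, which declares a match
    between mask i and target j."""
    m, b = len(Dist), len(Dist[0])
    Match = [[0.0] * b for _ in range(m)]
    for i in range(m):                      # per-row pass
        j = min(range(b), key=lambda j: Dist[i][j])
        if Dist[i][j] != float("inf"):
            Match[i][j] += a1
    for j in range(b):                      # per-column pass
        i = min(range(m), key=lambda i: Dist[i][j])
        if Dist[i][j] != float("inf"):
            Match[i][j] += a1
    return Match

# Three masks, two targets: masks 0 and 1 each pair off mutually with a
# target; mask 2 is nearer target 0 than target 1 but loses to mask 0.
inf = float("inf")
Match = match_matrix([[3.0, inf], [inf, 7.0], [9.0, inf]], a1=1.0)
```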
Optionally, in some possible implementations of the invention, after the first motion mask matching moving target j has been determined based on the matching degrees, the moving target tracking method may further include updating the predictive displacement (dx, dy) recorded in the first motion mask as follows, where (xj, yj) is the reference point coordinates of moving target j, and (xi, yi) is the reference point coordinates most recently added to the movement locus array of the first motion mask before the reference point coordinates of moving target j were added.
Optionally, in some possible implementations of the invention, after the first motion mask matching the moving target has been determined based on the matching degrees, the moving target tracking method may further include: increasing by s1 the confidence Threshold_In1, recorded in the first motion mask, that the first motion mask becomes a motion tracking template, where s1 is a positive integer, for example 1, 2, 3, 4, 6, 8, 10, 20, 51 or another positive integer.
Optionally, in some possible implementations of the invention, for each motion mask in the set that matches none of the b moving targets, the confidence Threshold_In1 of becoming a motion tracking template recorded in that motion mask is decreased by s1, and the disappearance confidence Threshold_In2 recorded in that motion mask is decreased by s2, where s2 is a positive integer, for example 1, 2, 3, 4, 6, 8, 10, 20, 51 or another positive integer.
Further, when the disappearance confidence Threshold_In2 recorded in some motion mask in the set is less than or equal to S22 (S22 being, for example, less than or equal to 0), that motion mask may be rejected from the motion mask set.
Further, the area of the moving target recorded in the first motion mask may also be updated using the area of the moving target, and the position of the moving target recorded in the first motion mask may also be updated using the position of the moving target.
Optionally, in some possible implementations of the invention, determining the movement locus of the moving target based on the reference point coordinates currently recorded in the movement locus array in the first motion mask may include: when the confidence Threshold_In1 of the motion tracking template recorded in the first motion mask is greater than a threshold Thr1 (where Thr1 may, for example, be 8, 10, 11 or 15, and the initial value of Threshold_In1 may, for example, be 0, 1 or 2), and/or the disappearance confidence Threshold_In2 recorded in the first motion mask is greater than a threshold Thr2 (where Thr2 may, for example, be 20, 22 or 25, and the initial value of Threshold_In2 is, for example, 25), determining the movement locus of the moving target based on the reference point coordinates currently recorded in the movement locus array in the first motion mask.
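This confidence gate can be sketched as follows. The sketch uses the "and" variant of the text's "and/or", and the field names are our own (in1 for Threshold_In1, in2 for Threshold_In2).

```python
def ready_to_draw(mask, Thr1=10, Thr2=20):
    """Gate trajectory drawing on the mask's two confidences: draw only
    once the tracking confidence has built up past Thr1 and the
    disappearance confidence is still above Thr2 (example values from
    the ranges given in the text)."""
    return mask["in1"] > Thr1 and mask["in2"] > Thr2

warmed_up = ready_to_draw({"in1": 12, "in2": 25})   # tracked long enough
too_new = ready_to_draw({"in1": 5, "in2": 25})      # not yet confirmed
```

The gate suppresses drawing for freshly created masks, so short-lived noise blobs never produce a trajectory.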
Optionally, in some possible implementations of the invention, determining the movement locus of the moving target based on the reference point coordinates currently recorded in the movement locus array in the first motion mask may include: calculating the direction gradients between adjacent pairs of the P1 earliest-recorded reference point coordinates in the movement locus array of the first motion mask to obtain P1-1 direction gradients, and calculating the angles between adjacent direction gradients among these P1-1 direction gradients to obtain P1-2 angles, where P1 is a positive integer greater than 2; and calculating the direction gradients between adjacent pairs of the P2 latest-recorded reference point coordinates in that movement locus array to obtain P2-1 direction gradients, and calculating the angles between adjacent direction gradients among these P2-1 direction gradients to obtain P2-2 angles, where P2 is a positive integer greater than 2. Here, two adjacent reference point coordinates in the movement locus array of the first motion mask are two reference point coordinates added to that array at adjacent times, and adjacent direction gradients are the two direction gradients computed from three reference point coordinates added to that array at adjacent times. For example, if reference point coordinates f1, f2 and f3 are three adjacent reference point coordinates in the movement locus array of the first motion mask, the direction gradient between f1 and f2 is f1_2, and the direction gradient between f2 and f3 is f2_3, then f1_2 and f2_3 are adjacent direction gradients. For ease of recording, the later a reference point coordinate is added to the movement locus array of a motion mask (e.g. the first motion mask), the larger (or the smaller) its index in that array.
If the number of the P1-2 angles greater than the first angle threshold is greater than P3, the number of the P2-2 angles greater than the second angle threshold is greater than P4, and the area of the region where the movement locus corresponding to the reference point coordinates recorded in the movement locus array of the first motion mask lies is greater than the first area threshold, the movement locus of the moving target is drawn using the reference point coordinates currently recorded in the movement locus array of the first motion mask. Here, the region where the movement locus corresponding to those reference point coordinates lies may be the maximum circumscribed rectangular region of that movement locus.
The value range of the first angle threshold may, for example, be [0°, 180°], with a preferred range of, for example, [30°, 90°]; specifically it may equal 30°, 38°, 45°, 60°, 70°, 90° or another angle. The value range of the second angle threshold may, for example, be [0°, 180°], with a preferred range of, for example, [30°, 90°]; specifically it may equal 30°, 38°, 48°, 65°, 77°, 90° or another angle. The value range of the first area threshold may, for example, be [10, 50]; specifically it may equal 10, 15, 21, 25, 30, 35, 40, 44, 48, 50 or another value.
Where it is assumed that, XminRepresent the minimum X-coordinate value in the reference point coordinates of movement locus array record, XmaxTable
Show the maximum X-coordinate value in the reference point coordinates of movement locus array record.YminRepresent the reference of movement locus array record
Minimum Y-coordinate value in point coordinates, YmaxRepresent the maximum Y-coordinate value in the reference point coordinates of movement locus array record.
Wherein, (Xmin,Ymin) and (Xmax,Ymax) between Euclidean distance, can be above-mentioned first Motion mask in movement locus array
The area of the movement locus region corresponding to the reference point coordinates of record.
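As a concrete illustration, the region-size measure just described can be sketched as follows (a minimal sketch; the function name `trajectory_region_size` is hypothetical, and the diagonal length of the circumscribed rectangle is used as the "area" measure, exactly as the text defines it):

```python
import math

def trajectory_region_size(points):
    """Return the Euclidean distance between the circumscribed-rectangle
    corners (Xmin, Ymin) and (Xmax, Ymax) of the recorded reference point
    coordinates, used as the 'area' measure of the trajectory region."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return math.hypot(max(xs) - min(xs), max(ys) - min(ys))
```

For instance, three reference points (0, 0), (3, 4) and (1, 2) give a circumscribed rectangle with corners (0, 0) and (3, 4), so the measure is 5.0, which would then be compared against the first area threshold.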
The angle between two direction gradients is obtained from its cosine, which equals the inner product of the two direction gradients divided by the product of their moduli. A specific calculation formula may be as follows:
cos θ = (dxi·dxi+1 + dyi·dyi+1) / (√(dxi² + dyi²) · √(dxi+1² + dyi+1²))
where (dxi, dyi) is the current direction gradient and (dxi+1, dyi+1) is the adjacent next direction gradient.
P1 and P2 may, for example, be equal to 10, 12, 15 or 8; P3 may be equal to 6, 5 or 4; and P4 may be equal to 3, 4, 5 or 6. If the Motion mask is not locked, the start position of the trajectory points is advanced by 2, i.e., the next direction gradient calculation starts two points after the current start position, that is, StartPos = StartPos + 2.
Optionally, in some possible implementations of the invention, the above reference point coordinates are center point coordinates, centroid point coordinates or other point coordinates. For example, the reference point coordinates of the moving target may be the center point coordinates or centroid point coordinates of the moving target, or the coordinates of another point on the moving target.
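The direction gradients between adjacent reference point coordinates, and the angles between adjacent direction gradients, can be sketched as below (a minimal illustration under the definitions above; the function names are hypothetical, and the cosine formula from the text is used):

```python
import math

def direction_gradients(points):
    """Direction gradients between temporally adjacent reference point
    coordinates: P points yield P-1 gradients."""
    return [(b[0] - a[0], b[1] - a[1]) for a, b in zip(points, points[1:])]

def adjacent_angles(grads):
    """Angles (in degrees) between adjacent direction gradients:
    P-1 gradients yield P-2 angles, via the inner-product cosine."""
    angles = []
    for (dx1, dy1), (dx2, dy2) in zip(grads, grads[1:]):
        dot = dx1 * dx2 + dy1 * dy2
        norm = math.hypot(dx1, dy1) * math.hypot(dx2, dy2)
        # Clamp to [-1, 1] to guard against floating-point drift.
        angles.append(math.degrees(math.acos(max(-1.0, min(1.0, dot / norm)))))
    return angles
```

A right-angle turn in the trajectory, e.g. points (0, 0) → (1, 0) → (1, 1), yields a single 90° angle; a straight run yields 0°, so counting angles above the first/second angle threshold measures how much the trajectory bends.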
It can be appreciated that each of the b moving targets may be tracked in a manner similar to the way moving target j is tracked in the example above.
As can be seen, in this embodiment, after morphological processing is performed on the foreground picture of the image, a connected region extraction operation is performed on the processed foreground picture to obtain the b moving targets contained in the foreground picture, and the motion trajectories of all or some of the b moving targets are determined based on their corresponding Motion masks. Because the above scheme tracks objects at the granularity of whole moving targets, compared with the prior art, which needs to perform tracking computation at the granularity of individual pixels, the above technical scheme of the present invention helps to greatly reduce the computational complexity of tracking moving targets.
Relevant apparatuses for implementing the above schemes are also provided below.
Referring to Fig. 3, a motion target tracking device 300 provided by the present invention may include: an acquiring unit 301, an obtaining unit 302, a processing unit 303, an extraction unit 304 and a tracking processing unit 305.
The acquiring unit 301 is configured to obtain an image from a video sequence.
The obtaining unit 302 is configured to obtain a foreground picture of the image.
The processing unit 303 is configured to perform morphological processing on the foreground picture.
The extraction unit 304 is configured to perform a connected region extraction operation on the foreground picture after morphological processing, to obtain the moving target contained in the foreground picture.
The tracking processing unit 305 is configured to determine the motion trajectory of the moving target based on the Motion mask corresponding to the moving target.
Optionally, in some possible implementations of the invention, the tracking processing unit 305 is specifically configured to: calculate the Euclidean distance between the moving target and each Motion mask in a Motion mask set; determine, based on the calculated Euclidean distances, the matching degree between the moving target and each Motion mask in the Motion mask set; and, if a first Motion mask matching the moving target is determined based on the matching degrees, add the reference point coordinates of the moving target into the motion trajectory array in the first Motion mask and determine the motion trajectory of the moving target based on the reference point coordinates currently recorded in that motion trajectory array, where the first Motion mask is one of the Motion masks in the Motion mask set.
Optionally, in some possible implementations of the invention, the tracking processing unit 305 is further configured to: if it is determined based on the matching degrees that the moving target matches none of the Motion masks in the Motion mask set, generate a first Motion mask corresponding to the moving target and add it into the Motion mask set, where the motion trajectory array of the first Motion mask records the reference point coordinates of the moving target.
Optionally, in some possible implementations of the invention, in the aspect of calculating the Euclidean distance between the moving target and each Motion mask in the Motion mask set, the tracking processing unit 305 is specifically configured to construct a distance matrix Dist using the Motion mask set and the moving target;
where the number of rows m of the distance matrix equals the number of Motion masks currently contained in the Motion mask set, and element Di in Dist represents the Euclidean distance between the moving target and Motion mask i in the Motion mask set.
Optionally, in some possible implementations of the invention, Di may be calculated by the following formula:
Di = √((xj − xi − dx)² + (yj − yi − dy)²)
where T is the first threshold (a distance not less than T may be clamped to T, indicating no match), (xj, yj) are the reference point coordinates of the moving target, (xi, yi) are the reference point coordinates of Motion mask i in the Motion mask set, and (dx, dy) is the predictive displacement recorded in Motion mask i.
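Because the patent publication presents the Di formula only as an image, the sketch below assumes one plausible form: Di is the Euclidean distance between the target's reference point and the template's reference point shifted by its predictive displacement, clamped at the first threshold T. The dictionary layout of a Motion mask is likewise hypothetical:

```python
import math

def distance_matrix(templates, target_xy, T):
    """Construct Dist with one entry per Motion mask currently in the set.
    Assumed form: Di = distance between the target reference point (xj, yj)
    and the template reference point (xi, yi) shifted by the template's
    predictive displacement (dx, dy); distances of T or more are clamped
    to T (treated as 'too far to match')."""
    xj, yj = target_xy
    dist = []
    for t in templates:
        xi, yi = t["ref"]     # hypothetical field: template reference point
        dx, dy = t["pred"]    # hypothetical field: predictive displacement
        d = math.hypot(xj - (xi + dx), yj - (yi + dy))
        dist.append(min(d, T))
    return dist
```

With one template at (0, 0) predicting a displacement of (1, 1), a target at (4, 5) is 5.0 away from the predicted position (1, 1), well under a first threshold of, say, 100.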
Optionally, in some possible implementations of the invention, in the aspect of determining, based on the calculated Euclidean distances between the moving target and each Motion mask in the Motion mask set, the matching degree between the moving target and each Motion mask in the Motion mask set, the tracking processing unit 305 is specifically configured to construct a matching matrix Match based on the distance matrix;
where the number of rows m of the matching matrix equals the number of Motion masks currently contained in the Motion mask set, and element Mi in Match represents the matching degree between the moving target and Motion mask i in the Motion mask set, this matching degree being determined based on the Euclidean distance between the moving target and Motion mask i.
Optionally, in some possible implementations of the invention, Mi is calculated from the distance matrix Dist, for example by comparing Di with the other elements of Dist.
Optionally, in some possible implementations of the invention, after the first Motion mask matching the moving target is determined based on the matching degrees, the tracking processing unit 305 is further configured to update the predictive displacement (dx, dy) contained in the first Motion mask as follows:
(dx, dy) = (xi1 − xi, yi1 − yi)
where (xi1, yi1) are the reference point coordinates of the moving target, and (xi, yi) are the reference point coordinates most recently added to the motion trajectory array of the first Motion mask before the reference point coordinates of the moving target are added.
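The matching formula itself is not reproduced in the text, so the sketch below assumes a simple rule consistent with the description: the Motion mask with the smallest Di matches, provided Di is below the first threshold T; the matched mask's predictive displacement is then updated from its most recently recorded reference point, as described above. All structure and function names are hypothetical:

```python
def match_and_update(templates, dist, target_xy, T):
    """Assumed matching rule: the template with the smallest Di matches,
    provided Di < T. The matched template's predictive displacement is
    updated to the offset between the target's reference point and the
    template's most recently recorded reference point, and the target's
    reference point is appended to the trajectory array."""
    i = min(range(len(dist)), key=dist.__getitem__)
    if dist[i] >= T:
        return None  # the moving target matches no Motion mask in the set
    xj, yj = target_xy
    xi, yi = templates[i]["traj"][-1]          # latest recorded reference point
    templates[i]["pred"] = (xj - xi, yj - yi)  # updated (dx, dy)
    templates[i]["traj"].append(target_xy)
    return i
```

When `None` is returned, the caller would generate a new first Motion mask for the target and add it to the set, matching the behaviour described earlier.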
Optionally, in some possible implementations of the invention, after the first Motion mask matching the moving target is determined based on the matching degrees, the tracking processing unit 305 is further configured to: add s1 to the confidence Threshold_In1, recorded in the first Motion mask, that the first Motion mask becomes a motion tracking template; subtract s1 from the confidence Threshold_In1 recorded in each other Motion mask in the Motion mask set; and subtract s2 from the confidence Threshold_In2, recorded in each of those other Motion masks, that the Motion mask disappears, where s1 and s2 are positive integers.
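The confidence bookkeeping just described can be sketched as follows (the field names `conf_track` and `conf_gone` are hypothetical stand-ins for Threshold_In1 and Threshold_In2):

```python
def update_confidences(templates, matched_index, s1=1, s2=1):
    """On a match: raise the matched template's tracking confidence
    (Threshold_In1) by s1; lower it by s1 for every other template and
    lower their disappearance confidence (Threshold_In2) by s2."""
    for i, t in enumerate(templates):
        if i == matched_index:
            t["conf_track"] += s1
        else:
            t["conf_track"] -= s1
            t["conf_gone"] -= s2
```

Templates that keep matching thus gain tracking confidence over time, while stale templates drift toward removal.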
Optionally, in some possible implementations of the invention, in the aspect of determining the motion trajectory of the moving target based on the reference point coordinates currently recorded in the motion trajectory array of the first Motion mask, the tracking processing unit 305 is specifically configured to:
calculate the direction gradients between adjacent pairs of the P1 earliest-recorded reference point coordinates in the motion trajectory array of the first Motion mask to obtain P1-1 direction gradients, and calculate the angles between adjacent direction gradients among the P1-1 direction gradients to obtain P1-2 angles, where P1 is a positive integer greater than 2;
calculate the direction gradients between adjacent pairs of the P2 latest-recorded reference point coordinates in the motion trajectory array of the first Motion mask to obtain P2-1 direction gradients, and calculate the angles between adjacent direction gradients among the P2-1 direction gradients to obtain P2-2 angles, where P2 is a positive integer greater than 2; and
if more than P3 of the P1-2 angles exceed the first angle threshold, more than P4 of the P2-2 angles exceed the second angle threshold, and the area of the region where the motion trajectory corresponding to the reference point coordinates recorded in the motion trajectory array of the first Motion mask is located exceeds the first area threshold, draw the motion trajectory of the moving target using the reference point coordinates currently recorded in the motion trajectory array of the first Motion mask.
Optionally, in some possible implementations of the invention, the above reference point coordinates are center point coordinates or centroid point coordinates.
It can be understood that the functions of the functional modules of the motion target tracking device 300 of this embodiment may be implemented according to the methods in the above method embodiments; for the specific implementation process, reference may be made to the relevant descriptions of the above method embodiments, which are not repeated here.
As can be seen that in the present embodiment after the foreground picture to image carries out Morphological scale-space;To carrying out Morphological scale-space
Above-mentioned foreground picture afterwards carries out connected region and extracts operation to obtain the moving target that above-mentioned foreground picture is included;Based on above-mentioned motion
The corresponding Motion mask of target determines the movement locus of above-mentioned moving target.Due to be such scheme be with moving target generally
Granularity carries out Object tracking, compared with prior art needs using each pixel to be tracked computing as granularity, the present invention
Above-mentioned technical proposal is conducive to the computation complexity of larger reduction pursuit movement target.
Referring to Fig. 4, Fig. 4 is a structural block diagram of a motion target tracking device 400 provided by another embodiment of the present invention. The motion target tracking device 400 may include: at least one processor 401, a memory 405 and at least one communication bus 402. The communication bus 402 is used to realize connection and communication between these components.
Optionally, the motion target tracking device 400 may further include: at least one network interface 404, a user interface 403 and the like. Optionally, the user interface 403 includes a display (such as a touch screen, a liquid crystal display, a holographic display or a projector), a pointing device (such as a mouse, a trackball, a touch pad or a touch screen), a camera and/or a sound pickup device, etc.
The memory 405 may include a read-only memory and a random access memory, and provides instructions and data to the processor 401. A part of the memory 405 may further include a non-volatile random access memory.
In some possible implementations, the memory 405 stores the following elements: executable modules or data structures, or a subset or superset thereof:
an operating system and application programs.
The application programs may include the acquiring unit 301, the obtaining unit 302, the processing unit 303, the extraction unit 304, the tracking processing unit 305 and the like.
In this embodiment of the present invention, the processor 401 executes the code or instructions in the memory 405 to: obtain an image from a video sequence; obtain a foreground picture of the image; perform morphological processing on the foreground picture; perform a connected region extraction operation on the foreground picture after morphological processing to obtain the moving target contained in the foreground picture; and determine the motion trajectory of the moving target based on the Motion mask corresponding to the moving target.
The foreground picture of an image may contain one or more moving targets, or may contain no moving target. The embodiments of the present invention are mainly described by taking the scenario in which the foreground picture contains one or more moving targets as an example.
Optionally, in some possible implementations of the invention, the processor 401 obtaining the foreground picture of the image may include: the processor 401 obtains a background picture of the image, and obtains the foreground picture of the image based on the image and the background picture. The background picture contains no moving target and may be a grayscale background picture or a color background picture. The foreground picture is an image in which the moving targets are marked; for example, all non-moving-target regions are marked black and all moving target regions are marked white. The background modeling method may, for example, use a Gaussian mixture background modeling algorithm: images are read from the video sequence, and the background picture is built from them using the Gaussian mixture background modeling algorithm. The image and the background picture are then processed by a background difference algorithm to obtain the foreground picture.
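A minimal background-difference sketch, assuming grayscale images stored as nested lists and a hypothetical difference threshold (the embodiment's Gaussian mixture background modeling itself is not reproduced here):

```python
def foreground_mask(image, background, thresh=30):
    """Background difference on grayscale images (nested lists of 0-255):
    pixels whose absolute difference from the background exceeds the
    threshold are marked white (255, moving target region), others black (0)."""
    return [[255 if abs(p - b) > thresh else 0
             for p, b in zip(img_row, bg_row)]
            for img_row, bg_row in zip(image, background)]
```

In practice the background picture would come from the background model; this sketch only shows the marking convention described above (moving target regions white, the rest black).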
In some possible implementations of the invention, the processor 401 performing morphological processing on the foreground picture may include: the processor 401 performs at least one of the following morphological processes on the foreground picture: filtering, dilation-erosion processing, opening operation and closing operation.
For example, if the foreground picture contains significant noise, filtering and dilation-erosion processing may be applied to it. Median filtering may first be performed on the foreground picture to remove noise, and the filtered foreground picture may then be dilated and eroded, so as to remove holes in the moving targets and merge fragments of the moving targets. Suppose the median filter template is an n*n template that is scanned over the foreground picture, where n is preferably an odd number, for example 3, 5, 7, 9, 11 or 13.
For each position of the median filter template, the n*n pixel values of the corresponding region of the foreground picture are first sorted, and the median is then selected to replace the pixel value at the corresponding position in the foreground picture. For example, with a 3*3 median filter template applied to a binary foreground picture, a point is kept as a foreground point only if at least five of the nine neighborhood pixels are foreground points; in this way some small isolated noises can be removed.
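The 3*3 median filtering step described above can be sketched as follows (a minimal illustration for binary images stored as nested lists; leaving border pixels unchanged is an assumption):

```python
def median_filter_3x3(img):
    """3x3 median filter on a binary image (nested lists of 0/255).
    An interior pixel stays 255 only if at least five of the nine
    neighborhood pixels are 255, removing small isolated noise.
    Border pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + j][x + i]
                            for j in (-1, 0, 1) for i in (-1, 0, 1))
            out[y][x] = window[4]  # median of the 9 sorted values
    return out
```

A single isolated foreground pixel surrounded by background sorts to a window whose median is 0, so it is removed, exactly the "independent small noise" case described above.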
Optionally, dilation and erosion may use a 3*3 template whose origin is the center point of the template. Dilation performs an AND operation between the template and the corresponding pixel values of the foreground picture: if all results are 0, the value of the binary image position corresponding to the origin is assigned 0, and otherwise 255. Erosion performs an AND operation between the template and the corresponding pixel values of the binary image: if all results are 255, the value of the binary image position corresponding to the origin is assigned 255, and otherwise 0. Of course, in some application scenarios, dilation and erosion may also be performed in other manners.
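The 3*3 dilation and erosion described above can be sketched as below (binary images as nested lists of 0/255; leaving border pixels unchanged is an assumption):

```python
def dilate_3x3(img):
    """Binary dilation: the origin becomes 255 if any pixel under the
    3x3 template is 255, otherwise 0 (borders left unchanged)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = 255 if any(img[y + j][x + i]
                                   for j in (-1, 0, 1)
                                   for i in (-1, 0, 1)) else 0
    return out

def erode_3x3(img):
    """Binary erosion: the origin stays 255 only if every pixel under
    the 3x3 template is 255, otherwise 0 (borders left unchanged)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = 255 if all(img[y + j][x + i] == 255
                                   for j in (-1, 0, 1)
                                   for i in (-1, 0, 1)) else 0
    return out
```

Applying dilation before erosion (a closing operation) fills holes in a moving target and merges its fragments, as the embodiment describes.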
Optionally, in some possible implementations of the invention, the processor 401 determining the motion trajectory of the moving target based on the Motion mask corresponding to the moving target may include: the processor 401 calculates the Euclidean distance between the moving target and each Motion mask in the Motion mask set; determines, based on the calculated Euclidean distances, the matching degree between the moving target and each Motion mask in the Motion mask set; and, if a first Motion mask matching the moving target is determined based on the matching degrees, adds the reference point coordinates of the moving target into the motion trajectory array in the first Motion mask and determines the motion trajectory of the moving target based on the reference point coordinates currently recorded in that motion trajectory array, where the first Motion mask is one of the Motion masks in the Motion mask set.
In addition, if the processor 401 determines based on the matching degrees that the moving target matches none of the Motion masks in the Motion mask set, it may generate a first Motion mask corresponding to the moving target and add it into the Motion mask set, where the motion trajectory array of the first Motion mask records the reference point coordinates of the moving target.
Optionally, in some possible implementations of the invention, the processor 401 calculating the Euclidean distance between the moving target and each Motion mask in the Motion mask set may include:
constructing a distance matrix Dist using the Motion mask set and the moving target;
where the number of rows m of the distance matrix equals the number of Motion masks currently contained in the Motion mask set, and element Di in Dist represents the Euclidean distance between the moving target and Motion mask i in the Motion mask set.
Di may, for example, be calculated by the following formula:
Di = √((xj − xi − dx)² + (yj − yi − dy)²)
where (xi, yi) are the reference point coordinates of Motion mask i in the Motion mask set, (xj, yj) are the reference point coordinates of the moving target, (dx, dy) is the predictive displacement recorded in Motion mask i, and T is the first threshold (a distance not less than T may be clamped to T). The first threshold T may take a value in, for example, [20, 150], and may specifically be 20, 25, 30, 35, 40, 50, 120, 140, 150 or the like.
Optionally, in some possible implementations of the invention, the processor 401 determining, based on the calculated Euclidean distances between the moving target and each Motion mask in the Motion mask set, the matching degree between the moving target and each Motion mask in the Motion mask set may include:
constructing a matching matrix Match based on the distance matrix;
where the number of rows m of the matching matrix Match equals the number of Motion masks currently contained in the Motion mask set, and element Mi in Match represents the matching degree between the moving target and Motion mask i in the Motion mask set, this matching degree being determined based on the Euclidean distance between the moving target and Motion mask i.
Optionally, in some possible implementations of the invention, Mi is calculated from the elements of the distance matrix, where element Mk represents the matching degree between the moving target and Motion mask k in the Motion mask set.
Optionally, in other possible implementations of the invention, Mi may also be calculated with a parameter a1, where a1 may be a positive number; for example, a1 may be equal to 0.2, 1, 2, 3, 4.5, 8.3 or another value.
When Mi is greater than the other elements in the matching matrix Match (for example, when Mi equals a1), the moving target is considered to match Motion mask i in the Motion mask set.
Optionally, in some possible implementations of the invention, after the first Motion mask matching the moving target is determined based on the matching degrees (for example, when the matching degree between the moving target and the first Motion mask is greater than the matching degrees between the moving target and the other Motion masks in the Motion mask set), the processor 401 may be further configured to update the predictive displacement (dx, dy) contained in the first Motion mask as follows:
(dx, dy) = (xj − xi, yj − yi)
where (xj, yj) are the reference point coordinates of the moving target, and (xi, yi) are the reference point coordinates most recently added to the motion trajectory array of the first Motion mask before the reference point coordinates of the moving target are added.
Optionally, in some possible implementations of the invention, after the first Motion mask matching the moving target is determined based on the matching degrees, the processor 401 may further: add s1 to the confidence Threshold_In1, recorded in the first Motion mask, that the first Motion mask becomes a motion tracking template, where s1 is a positive integer, for example 1, 2, 3, 4, 6, 8, 10, 20, 51 or another positive integer; subtract s1 from the confidence Threshold_In1 recorded in each other Motion mask in the Motion mask set; and subtract s2 from the confidence Threshold_In2, recorded in each of those other Motion masks, that the Motion mask disappears, where s2 is a positive integer, for example 1, 2, 3, 4, 6, 8, 10, 20, 51 or another positive integer.
Further, when the disappearance confidence Threshold_In2 recorded in a certain Motion mask in the Motion mask set is less than or equal to S22 (S22 being, for example, less than or equal to 0), that Motion mask may be removed from the Motion mask set.
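The removal rule just described can be sketched as follows (the field name `conf_gone` is a hypothetical stand-in for Threshold_In2):

```python
def prune_templates(templates, s22=0):
    """Remove from the Motion mask set any mask whose disappearance
    confidence (Threshold_In2) has dropped to s22 or below."""
    return [t for t in templates if t["conf_gone"] > s22]
```

This keeps the template set from accumulating masks for targets that have left the scene.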
Further, the processor 401 may also update the area of the moving target recorded in the first Motion mask using the area of the moving target, and update the position of the moving target recorded in the first Motion mask using the position of the moving target.
Optionally, in some possible implementations of the invention, the processor 401 determining the motion trajectory of the moving target based on the reference point coordinates currently recorded in the motion trajectory array of the first Motion mask may include:
calculating the direction gradients between adjacent pairs of the P1 earliest-recorded reference point coordinates in the motion trajectory array of the first Motion mask to obtain P1-1 direction gradients, and calculating the angles between adjacent direction gradients among the P1-1 direction gradients to obtain P1-2 angles, where P1 is a positive integer greater than 2;
calculating the direction gradients between adjacent pairs of the P2 latest-recorded reference point coordinates in the motion trajectory array of the first Motion mask to obtain P2-1 direction gradients, and calculating the angles between adjacent direction gradients among the P2-1 direction gradients to obtain P2-2 angles, where P2 is a positive integer greater than 2; and
if more than P3 of the P1-2 angles exceed the first angle threshold, more than P4 of the P2-2 angles exceed the second angle threshold, and the area of the region where the motion trajectory corresponding to the reference point coordinates recorded in the motion trajectory array of the first Motion mask is located exceeds the first area threshold, drawing the motion trajectory of the moving target using the reference point coordinates currently recorded in the motion trajectory array of the first Motion mask.
Two adjacent reference point coordinates are two temporally adjacent reference point coordinates in the motion trajectory array of the first Motion mask, and adjacent direction gradients are the two direction gradients computed from three temporally adjacent reference point coordinates in that array. For example, suppose f1, f2 and f3 are three adjacent reference point coordinates in the motion trajectory array of the first Motion mask, the direction gradient between reference point coordinates f1 and f2 is f1_2, and the direction gradient between reference point coordinates f2 and f3 is f2_3; then f1_2 and f2_3 are adjacent direction gradients. For ease of recording, the later a reference point coordinate is added to the motion trajectory array of the Motion mask (the first Motion mask), the larger (or the smaller) its index in that array.
The region where the motion trajectory corresponding to the reference point coordinates recorded in the motion trajectory array of the first Motion mask is located may be the maximum circumscribed rectangular region of that motion trajectory.
The first angle threshold may take a value in, for example, [0°, 180°]; a preferable range is, for example, [30°, 90°], and it may specifically be equal to 30°, 38°, 45°, 60°, 70°, 90° or another angle.
The second angle threshold may take a value in, for example, [0°, 180°]; a preferable range is, for example, [30°, 90°], and it may specifically be equal to 30°, 38°, 48°, 65°, 77°, 90° or another angle.
The first area threshold may take a value in, for example, [10, 50], and may specifically be equal to 10, 15, 21, 25, 30, 35, 40, 44, 48, 50 or another value.
Suppose Xmin and Xmax denote the minimum and maximum X coordinate values among the reference point coordinates recorded in the motion trajectory array, and Ymin and Ymax denote the minimum and maximum Y coordinate values among those reference point coordinates. The Euclidean distance between (Xmin, Ymin) and (Xmax, Ymax) may be taken as the area of the region where the motion trajectory corresponding to the reference point coordinates recorded in the motion trajectory array of the first Motion mask is located.
The angle between two direction gradients is obtained from its cosine, which equals the inner product of the two direction gradients divided by the product of their moduli. A specific calculation formula may be as follows:
cos θ = (dxi·dxi+1 + dyi·dyi+1) / (√(dxi² + dyi²) · √(dxi+1² + dyi+1²))
where (dxi, dyi) is the current direction gradient and (dxi+1, dyi+1) is the adjacent next direction gradient.
P1 and P2 may, for example, be equal to 10, 12, 15 or 8; P3 may be equal to 6, 5 or 4; and P4 may be equal to 3, 4, 5 or 6. If the Motion mask is not locked, the start position of the trajectory points is advanced by 2, i.e., the next direction gradient calculation starts two points after the current start position, that is, StartPos = StartPos + 2.
Optionally, in some possible implementations of the invention, the above reference-point coordinates are center-point coordinates, centroid coordinates or other point coordinates. For example, the reference-point coordinates of a moving target may be the coordinates of the center point of the moving target, of its centroid, or of another point on the moving target.
It can be appreciated that, if the foreground image contains multiple moving targets, each moving target can be tracked in the manner described above.
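The per-target matching step (computing the distance from each extracted target to every motion template's predicted position, picking the best match, then updating that template's track and predicted displacement) can be sketched as follows. The patent's exact Di and Mi formulas are not reproduced in this text, so the nearest-template-within-threshold-T reading below, and all names and data layouts, are assumptions:

```python
import math

def match_target(target, templates, T=50.0):
    # Return the index of the motion template whose predicted position
    # (ref + pred displacement) is closest to the target's reference point,
    # or None if no template lies within threshold T.
    xj, yj = target
    best_i, best_d = None, T
    for i, tpl in enumerate(templates):
        xi, yi = tpl['ref']
        dx, dy = tpl['pred']
        d = math.hypot(xj - (xi + dx), yj - (yi + dy))
        if d < best_d:
            best_i, best_d = i, d
    return best_i

def update_template(tpl, target):
    # On a match: set the predicted displacement to the latest observed
    # displacement (an assumption; the patent's update formula is an image
    # not reproduced here) and append the target's reference point.
    x_last, y_last = tpl['track'][-1]
    tpl['pred'] = (target[0] - x_last, target[1] - y_last)
    tpl['track'].append(target)
```

When match_target returns None, a new template would be created for the target, as in claim 3 below.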
It can be understood that the functions of the functional modules of the motion target tracking device 400 of this embodiment can be implemented according to the methods in the foregoing method embodiments; for the specific implementation process, reference may be made to the related descriptions of the foregoing method embodiments, which are not repeated here.
Wherein, the motion target tracking device 400 may be, for example, a mobile phone, a tablet computer, a notebook computer, a personal computer, a video camera, a monitor or similar equipment.
It can be seen that, in this embodiment, the motion target tracking device 400 performs morphological processing on the foreground image of an image; performs a connected-region extraction operation on the morphologically processed foreground image to obtain the moving targets contained in the foreground image; and determines the movement locus of each moving target based on the motion template corresponding to that moving target. Because the above scheme performs object tracking at the granularity of whole moving targets, compared with the prior art, which must perform tracking computations at the granularity of individual pixels, the above technical solution of the present invention is conducive to greatly reducing the computational complexity of tracking moving targets.
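The connected-region extraction step that yields the moving targets can be illustrated with a minimal 4-connected labeling over a binary foreground mask (a pure-Python stand-in; a real implementation would typically use a library routine such as OpenCV's connectedComponentsWithStats):

```python
def connected_regions(mask):
    # 4-connected component extraction on a binary mask (list of rows of 0/1).
    # Returns one ((centroid_x, centroid_y), area) pair per foreground region;
    # each region corresponds to one candidate moving target.
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                stack, pixels = [(sy, sx)], []
                seen[sy][sx] = True
                while stack:  # iterative flood fill
                    y, x = stack.pop()
                    pixels.append((x, y))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cx = sum(p[0] for p in pixels) / len(pixels)
                cy = sum(p[1] for p in pixels) / len(pixels)
                regions.append(((cx, cy), len(pixels)))
    return regions
```

Each centroid can then serve as the reference-point coordinates fed to the template-matching step.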
An embodiment of the present invention also provides a computer storage medium, wherein the computer storage medium can store a program which, when executed, performs some or all of the steps of the motion target tracking method described in the foregoing method embodiments.
It should be noted that, for brevity of description, each of the foregoing method embodiments is expressed as a series of action combinations; however, those skilled in the art should know that the present invention is not limited by the described sequence of actions, because according to the present invention some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in this specification are preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
In the above embodiments, the description of each embodiment has its own emphasis; for a part not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed device may be implemented in other ways. For example, the device embodiments described above are merely schematic; the division into units is only a division by logical function, and other divisions are possible in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the couplings, direct couplings or communication connections shown or discussed between components may be indirect couplings or communication connections between devices or units through some interfaces, and may be electrical or in other forms.
The units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be realized in the form of hardware, or in the form of a software functional unit.
If the integrated unit is realized in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes: a USB flash disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a mobile hard disk, a magnetic disk, an optical disc, or any other medium that can store program code.
The above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some of the technical features; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (19)
1. A motion target tracking method, characterized by comprising:
obtaining an image from a video sequence;
obtaining a foreground image of the image;
performing morphological processing on the foreground image;
performing a connected-region extraction operation on the morphologically processed foreground image to obtain a moving target contained in the foreground image; and determining a movement locus of the moving target based on a motion template corresponding to the moving target;
wherein the determining the movement locus of the moving target based on the motion template corresponding to the moving target comprises:
calculating a Euclidean distance between the moving target and each motion template in a motion template set;
determining, based on the calculated Euclidean distance between the moving target and each motion template in the motion template set, a matching degree between the moving target and each motion template in the motion template set;
if a first motion template matching the moving target is determined based on the matching degrees, adding reference-point coordinates of the moving target to a movement-locus array in the first motion template, and determining the movement locus of the moving target based on the reference-point coordinates currently recorded in the movement-locus array in the first motion template, wherein the first motion template is one of the motion templates in the motion template set;
wherein the determining the movement locus of the moving target based on the reference-point coordinates currently recorded in the movement-locus array in the first motion template comprises:
calculating direction gradients between adjacent pairs of the P1 earliest-recorded reference-point coordinates in the movement-locus array in the first motion template to obtain P1-1 direction gradients, and calculating angles between adjacent direction gradients among the P1-1 direction gradients to obtain P1-2 angles, the P1 being a positive integer greater than 2;
calculating direction gradients between adjacent pairs of the P2 latest-recorded reference-point coordinates in the movement-locus array in the first motion template to obtain P2-1 direction gradients, and calculating angles between adjacent direction gradients among the P2-1 direction gradients to obtain P2-2 angles, the P2 being a positive integer greater than 2; and
if the number of angles among the P1-2 angles that exceed a first angle threshold is greater than P3, the number of angles among the P2-2 angles that exceed a second angle threshold is greater than P4, and the area of the trajectory region corresponding to the reference-point coordinates recorded in the movement-locus array of the first motion template is greater than a first area threshold, drawing the movement locus of the moving target using the reference-point coordinates currently recorded in the movement-locus array in the first motion template.
2. The method according to claim 1, characterized in that the performing morphological processing on the foreground image comprises: performing at least one of the following morphological processes on the foreground image: filtering, dilation-erosion processing, opening-operation processing and closing-operation processing.
3. The method according to claim 1, characterized in that the method further comprises: if it is determined based on the matching degrees that the moving target does not match any motion template in the motion template set, generating a first motion template corresponding to the moving target and adding the first motion template to the motion template set, wherein the reference-point coordinates of the moving target are recorded in the movement-locus array of the first motion template.
4. The method according to claim 1, characterized in that the calculating the Euclidean distance between the moving target and each motion template in the motion template set comprises: constructing a distance matrix Dist using the motion template set and the moving target; wherein the number of rows m of the distance matrix equals the number of motion templates currently contained in the motion template set, and the element Di in Dist represents the Euclidean distance between the moving target and motion template i in the motion template set.
5. The method according to claim 4, characterized in that Di is calculated by the following formula,
wherein T is a first threshold;
(xj, yj) are the reference-point coordinates of the moving target;
(xi, yi) are the reference-point coordinates of motion template i in the motion template set; and
(dx, dy) is the predicted displacement recorded in motion template i.
6. The method according to claim 5, characterized in that the determining, based on the calculated Euclidean distance between the moving target and each motion template in the motion template set, the matching degree between the moving target and each motion template in the motion template set comprises: constructing a matching matrix Match based on the distance matrix; wherein the number of rows m of the matching matrix equals the number of motion templates currently contained in the motion template set, and the element Mi in Match represents the matching degree between the moving target and motion template i in the motion template set, the matching degree being determined based on the Euclidean distance between the moving target and motion template i in the motion template set.
7. The method according to claim 6, characterized in that Mi is calculated by the following formula,
8. The method according to any one of claims 1 to 7, characterized in that, after the first motion template matching the moving target is determined based on the matching degrees, the method further comprises: updating the predicted displacement (dx, dy) contained in the first motion template as follows:
wherein (xi1, yi1) are the reference-point coordinates of the moving target, and (xi, yi) are the reference-point coordinates most recently added to the movement-locus array of the first motion template before the reference-point coordinates of the moving target were added.
9. The method according to any one of claims 1 to 7, characterized in that, after the first motion template matching the moving target is determined based on the matching degrees, the method further comprises:
increasing by s1 the confidence Threshold_In1, recorded in the first motion template, that the first motion template becomes a motion tracking template; and
for each of the other motion templates in the motion template set other than the first motion template, decreasing by s1 the recorded confidence Threshold_In1 that the motion template becomes a motion tracking template, and decreasing by s2 the recorded confidence Threshold_In2 that the motion template disappears, wherein s1 is a positive integer and s2 is a positive integer.
10. The method according to any one of claims 1 to 7, characterized in that the reference-point coordinates are center-point coordinates or centroid coordinates.
11. A motion target tracking device, characterized by comprising:
an acquiring unit, configured to obtain an image from a video sequence;
an obtaining unit, configured to obtain a foreground image of the image;
a processing unit, configured to perform morphological processing on the foreground image;
an extraction unit, configured to perform a connected-region extraction operation on the morphologically processed foreground image to obtain a moving target contained in the foreground image; and
a tracking processing unit, configured to determine a movement locus of the moving target based on a motion template corresponding to the moving target;
wherein the tracking processing unit is specifically configured to: calculate a Euclidean distance between the moving target and each motion template in a motion template set; determine, based on the calculated Euclidean distance between the moving target and each motion template in the motion template set, a matching degree between the moving target and each motion template in the motion template set; and, if a first motion template matching the moving target is determined based on the matching degrees, add reference-point coordinates of the moving target to a movement-locus array in the first motion template, and determine the movement locus of the moving target based on the reference-point coordinates currently recorded in the movement-locus array in the first motion template, wherein the first motion template is one of the motion templates in the motion template set;
wherein, in the aspect of determining the movement locus of the moving target based on the reference-point coordinates currently recorded in the movement-locus array in the first motion template, the tracking processing unit is specifically configured to:
calculate direction gradients between adjacent pairs of the P1 earliest-recorded reference-point coordinates in the movement-locus array in the first motion template to obtain P1-1 direction gradients, and calculate angles between adjacent direction gradients among the P1-1 direction gradients to obtain P1-2 angles, the P1 being a positive integer greater than 2;
calculate direction gradients between adjacent pairs of the P2 latest-recorded reference-point coordinates in the movement-locus array in the first motion template to obtain P2-1 direction gradients, and calculate angles between adjacent direction gradients among the P2-1 direction gradients to obtain P2-2 angles, the P2 being a positive integer greater than 2; and
if the number of angles among the P1-2 angles that exceed a first angle threshold is greater than P3, the number of angles among the P2-2 angles that exceed a second angle threshold is greater than P4, and the area of the trajectory region corresponding to the reference-point coordinates recorded in the movement-locus array of the first motion template is greater than a first area threshold, draw the movement locus of the moving target using the reference-point coordinates currently recorded in the movement-locus array in the first motion template.
12. The device according to claim 11, characterized in that the tracking processing unit is further configured to: if it is determined based on the matching degrees that the moving target does not match any motion template in the motion template set, generate a first motion template corresponding to the moving target and add the first motion template to the motion template set, wherein the reference-point coordinates of the moving target are recorded in the movement-locus array of the first motion template.
13. The device according to claim 11, characterized in that, in the aspect of calculating the Euclidean distance between the moving target and each motion template in the motion template set, the tracking processing unit is specifically configured to construct a distance matrix Dist using the motion template set and the moving target; wherein the number of rows m of the distance matrix equals the number of motion templates currently contained in the motion template set, and the element Di in Dist represents the Euclidean distance between the moving target and motion template i in the motion template set.
14. The device according to claim 13, characterized in that Di is calculated by the following formula,
wherein T is a first threshold;
(xj, yj) are the reference-point coordinates of the moving target;
(xi, yi) are the reference-point coordinates of motion template i in the motion template set; and
(dx, dy) is the predicted displacement recorded in motion template i.
15. The device according to claim 14, characterized in that, in the aspect of determining, based on the calculated Euclidean distance between the moving target and each motion template in the motion template set, the matching degree between the moving target and each motion template in the motion template set, the tracking processing unit is specifically configured to construct a matching matrix Match based on the distance matrix; wherein the number of rows m of the matching matrix equals the number of motion templates currently contained in the motion template set, and the element Mi in Match represents the matching degree between the moving target and motion template i in the motion template set, the matching degree being determined based on the Euclidean distance between the moving target and motion template i in the motion template set.
16. The device according to claim 15, characterized in that Mi is calculated by the following formula,
17. The device according to any one of claims 11 to 16, characterized in that, after the first motion template matching the moving target is determined based on the matching degrees, the tracking processing unit is further configured to update the predicted displacement (dx, dy) contained in the first motion template as follows:
wherein (xi1, yi1) are the reference-point coordinates of the moving target, and (xi, yi) are the reference-point coordinates most recently added to the movement-locus array of the first motion template before the reference-point coordinates of the moving target were added.
18. The device according to any one of claims 11 to 16, characterized in that, after the first motion template matching the moving target is determined based on the matching degrees, the tracking processing unit is further configured to:
increase by s1 the confidence Threshold_In1, recorded in the first motion template, that the first motion template becomes a motion tracking template; and
for each of the other motion templates in the motion template set other than the first motion template, decrease by s1 the recorded confidence Threshold_In1 that the motion template becomes a motion tracking template, and decrease by s2 the recorded confidence Threshold_In2 that the motion template disappears, wherein s1 is a positive integer and s2 is a positive integer.
19. The device according to any one of claims 11 to 16, characterized in that the reference-point coordinates are center-point coordinates or centroid coordinates.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410373576.2A CN104156982B (en) | 2014-07-31 | 2014-07-31 | Motion target tracking method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104156982A CN104156982A (en) | 2014-11-19 |
CN104156982B true CN104156982B (en) | 2017-06-13 |
Family
ID=51882471
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410373576.2A Active CN104156982B (en) | 2014-07-31 | 2014-07-31 | Motion target tracking method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104156982B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105025198B (en) * | 2015-07-22 | 2019-01-01 | 东方网力科技股份有限公司 | A kind of group technology of the video frequency motion target based on Spatio-temporal factors |
CN108986151B (en) * | 2017-05-31 | 2021-12-03 | 华为技术有限公司 | Multi-target tracking processing method and equipment |
CN110636248B (en) * | 2018-06-22 | 2021-08-27 | 华为技术有限公司 | Target tracking method and device |
CN110245611B (en) * | 2019-06-14 | 2021-06-15 | 腾讯科技(深圳)有限公司 | Image recognition method and device, computer equipment and storage medium |
CN113326719A (en) * | 2020-02-28 | 2021-08-31 | 华为技术有限公司 | Method, equipment and system for target tracking |
CN113837143B (en) * | 2021-10-21 | 2022-07-05 | 广州微林软件有限公司 | Action recognition method |
Non-Patent Citations (2)
Title |
---|
An infrared target tracking algorithm based on optimized template matching; Liu Yuhui; Journal of Northeastern University (Natural Science Edition); Oct. 2010; Vol. 31, No. 10; pp. 1389-1392 *
Research on the detection, localization and tracking of moving targets; Fang Ying; China Master's Theses Full-text Database; Jan. 15, 2009; Chapter 3 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |